pypeit.core.coadd module
Coadding module.
- pypeit.core.coadd.coadd_iexp_qa(wave, flux, rejivar, mask, wave_stack, flux_stack, ivar_stack, mask_stack, outmask, norder=None, title='', qafile=None)[source]
Routine to create QA showing the individual spectrum compared to the combined stacked spectrum, indicating which pixels were rejected.
- Parameters
wave (numpy.ndarray) – Wavelength array for spectrum of the exposure in question. Shape is (nspec,).
flux (numpy.ndarray) – Flux for the exposure in question. Shape is (nspec,).
ivar (numpy.ndarray) – Inverse variance for the exposure in question. Shape is (nspec,).
mask (numpy.ndarray) – Boolean array with mask for the exposure in question. True=Good. If not specified, it is determined from the inverse variance. Shape is (nspec,).
flux_stack (numpy.ndarray) – Stacked spectrum to be compared to the exposure in question. Shape is (nspec,).
ivar_stack (numpy.ndarray) – Inverse variance of the stacked spectrum. Shape is (nspec,).
mask_stack (numpy.ndarray) – Boolean array with mask for stacked spectrum. Shape is (nspec,).
norder (int, optional) – Number of orders if this is an echelle stack. If None, …
title (str, optional) – Plot title
qafile (str, optional) – QA file name
- pypeit.core.coadd.coadd_qa(wave, flux, ivar, nused, mask=None, tell=None, title=None, qafile=None)[source]
Routine to make a QA plot of the final stacked spectrum. It works for longslit/multislit spectra, coadds of individual echelle orders, and the final coadd of echelle data.
- Parameters
wave (numpy.ndarray) – one-d wavelength array of your spectrum; shape=(nspec,)
flux (numpy.ndarray) – one-d flux array of your spectrum; shape=(nspec,)
ivar (numpy.ndarray) – one-d ivar array of your spectrum; shape=(nspec,)
nused (numpy.ndarray) – how many exposures were used in the stack for each pixel; same shape as flux, shape=(nspec,)
mask (numpy.ndarray, optional) – boolean mask array for your spectrum; shape=(nspec,)
tell (numpy.ndarray, optional) – one-d telluric array for your spectrum; shape=(nspec,)
title (str, optional) – plot title
qafile (str, optional) – QA file name
- pypeit.core.coadd.combspec(waves, fluxes, ivars, masks, sn_smooth_npix, wave_method='linear', dwave=None, dv=None, dloglam=None, spec_samp_fact=1.0, wave_grid_min=None, wave_grid_max=None, ref_percentile=70.0, maxiter_scale=5, wave_grid_input=None, sigrej_scale=3.0, scale_method='auto', hand_scale=None, sn_min_polyscale=2.0, sn_min_medscale=0.5, const_weights=False, maxiter_reject=5, sn_clip=30.0, lower=3.0, upper=3.0, maxrej=None, qafile=None, title='', debug=False, debug_scale=False, show_scale=False, show=False, verbose=True)[source]
Main driver routine for coadding longslit/multi-slit spectra. NEED LOTS MORE HERE
- Parameters
waves (numpy.ndarray) – Wavelength arrays for spectra to be stacked. shape=(nspec, nexp)
fluxes (numpy.ndarray) – Flux arrays for spectra to be stacked. shape=(nspec, nexp)
ivars (numpy.ndarray) – ivar arrays for spectra to be stacked. shape=(nspec, nexp)
sn_smooth_npix (int) – Number of pixels to median filter by when computing S/N used to decide how to scale and weight spectra
wave_method (str, optional) – method for generating the new wavelength grid with get_wave_grid. Default is ‘linear’ which creates a uniformly spaced grid in lambda. See the documentation for get_wave_grid for a description of the options.
dwave (float, optional) – dispersion in units of A in case you want to specify it for get_wave_grid, otherwise the code computes the median spacing from the data.
dv (float, optional) – Dispersion in units of km/s in case you want to specify it in the get_wave_grid (for the ‘velocity’ option), otherwise a median value is computed from the data.
spec_samp_fact (float, optional) – Make the wavelength grid sampling finer (spec_samp_fact < 1.0) or coarser (spec_samp_fact > 1.0) by this sampling factor. This basically multiplies the ‘native’ spectral pixel size by spec_samp_fact, i.e. the units of spec_samp_fact are pixels.
wave_grid_min (float, optional) – In case you want to specify the minimum wavelength in your wavelength grid, default=None computes from data.
wave_grid_max (float, optional) – In case you want to specify the maximum wavelength in your wavelength grid, default=None computes from data.
ref_percentile (float, optional, default=70.0) – percentile fraction cut used for selecting the minimum SNR cut for robust_median_ratio
maxiter_scale (int, optional, default=5) – Maximum number of iterations performed for rescaling spectra.
wave_grid_input (numpy.ndarray) – User input wavelength grid to be used with the ‘user_input’ wave_method. Shape is (nspec_input,)
maxiter_reject (int, optional, default=5) – maximum number of iterations for stacking and rejection. The code stops iterating either when the output mask does not change between successive iterations or when maxiter_reject is reached.
sigrej_scale (float, optional, default=3.0) – Rejection threshold used for rejecting pixels when rescaling spectra with scale_spec.
scale_method (str, optional) – Options are ‘auto’, ‘poly’, ‘median’, ‘none’, or ‘hand’ (hand is not well tested). The user can optionally specify the rescaling method. The default ‘auto’ lets the code determine the method automatically, which works well.
hand_scale (numpy.ndarray, optional) – Array of hand scale factors, not well tested
sn_min_polyscale (float, optional, default = 2.0) – minimum SNR for performing polynomial scaling
sn_min_medscale (float, optional, default = 0.5) – minimum SNR for performing median scaling
const_weights (bool, optional) – If True, apply constant weights
sn_clip (float, optional, default=30.0) – Errors are capped during rejection so that the S/N is never greater than sn_clip. This prevents overly aggressive rejection in high S/N spectra which nevertheless differ at a level greater than the implied S/N due to systematics.
lower (float, optional, default=3.0) – lower rejection threshold for djs_reject
upper (float, optional, default=3.0) – upper rejection threshold for djs_reject
maxrej (int, optional) – maximum number of pixels to reject in each iteration for djs_reject.
qafile (str, default=None) – Root name for QA, if None, it will be determined from the outfile
title (str, optional) – Title for QA plots
debug (bool, default=False) – Show all QA plots useful for debugging. Note there are lots of QA plots, so only set this to True if you want to inspect them all.
debug_scale (bool, default=False) – show interactive QA plots for the rescaling of the spectra
show (bool, default=False) – If True, show key QA plots
show_scale (bool, default=False) – If True, show interactive QA plots for the rescaling of the spectra
- Returns
wave_grid_mid (numpy.ndarray) – Wavelength grid (in Angstrom) evaluated at the bin centers, uniformly-spaced either in lambda or log10-lambda/velocity. See core.wavecal.wvutils.py for more. shape=(ngrid,)
wave_stack (numpy.ndarray) – Wavelength grid for stacked spectrum. As discussed above, this is the weighted average of the wavelengths of each spectrum that contributed to a bin in the input wave_grid wavelength grid. It thus has ngrid elements, whereas wave_grid has ngrid+1 elements to specify the ngrid total number of bins. Note that wave_stack is NOT simply the wave_grid bin centers, since it computes the weighted average. shape=(ngrid,)
flux_stack (numpy.ndarray) – Final stacked spectrum on wave_stack wavelength grid shape=(ngrid,)
ivar_stack (numpy.ndarray) – Inverse variance spectrum on wave_stack wavelength grid. Errors are propagated according to weighting and masking. shape=(ngrid,)
mask_stack (numpy.ndarray) – Boolean mask for stacked spectrum on wave_stack wavelength grid. True=Good. shape=(ngrid,)
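The sn_clip error capping described in the parameters above can be sketched with plain numpy (the flux/ivar values here are hypothetical and the actual pypeit implementation may differ in detail):

```python
import numpy as np

# Hypothetical spectrum: the last pixel has implied S/N well above sn_clip
flux = np.array([1.0, 2.0, 100.0])
ivar = np.array([1.0, 4.0, 10.0])
sn_clip = 30.0

sn = flux * np.sqrt(ivar)                       # implied S/N per pixel
# Inflate the errors (reduce ivar) wherever the S/N exceeds the cap,
# so systematics in high-S/N spectra do not trigger aggressive rejection
ivar_capped = np.where(sn > sn_clip, (sn_clip / flux) ** 2, ivar)
```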
- pypeit.core.coadd.compute_coadd2d(ref_trace_stack, sciimg_stack, sciivar_stack, skymodel_stack, inmask_stack, thismask_stack, waveimg_stack, wave_grid, spat_samp_fact=1.0, maskdef_dict=None, weights='uniform', interp_dspat=True)[source]
Construct a 2d co-add of a stack of PypeIt spec2d reduction outputs.
Slits are ‘rectified’ onto a spatial and spectral grid, which encompasses the spectral and spatial coverage of the image stacks. The rectification uses nearest grid point interpolation to avoid covariant errors. Dithering is supported as all images are centered relative to a set of reference traces in trace_stack.
Todo
These docs appear out-of-date
- Parameters
ref_trace_stack (list) –
List of reference traces about which the images are rectified and coadded. If the images were not dithered then this reference trace can simply be the center of the slit:
slitcen = (slit_left + slit_righ)/2
If the images were dithered, then this object can either be the slitcen appropriately shifted with the dither pattern, or it could be the trace of the object of interest in each exposure determined by running PypeIt on the individual images. Shape is (nspec, nimgs).
sciimg_stack (list) – List of science images, each of which is a float numpy.ndarray. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.
sciivar_stack (list) – List of inverse variance images, each of which is a float numpy.ndarray. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.
skymodel_stack (list) – List of the model sky images, each of which is a float numpy.ndarray. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.
inmask_stack (list) – List of input good pixel masks (i.e. True values are good, False values are bad.), each of which is a boolean numpy.ndarray. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.
waveimg_stack (list) – List of the wavelength images, each of which is a float numpy.ndarray. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.
thismask_stack (list) – List of boolean arrays containing the masks indicating which pixels are on the slit in question. True values are on the slit; False values are off the slit. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.
weights (list or str, optional) – The weights used when combining the rectified images (see weighted_combine()). If weights is set to ‘uniform’ then a uniform weighting is used. If a list is passed it must have a length of nimgs. The individual elements of the list can either be floats, indicating the weight to be used for each image, or arrays with shape = (nspec,) or shape = (nspec, nspat), indicating pixel weights for the individual images. Weights are broadcast to the correct size of the image stacks (see broadcast_weights()), as necessary. (TODO: JFH I think the str option should be changed here, but am leaving it for now.)
spat_samp_fact (float, optional) – Spatial sampling for 2d coadd spatial bins in pixels. A value > 1.0 (i.e. bigger pixels) will downsample the images spatially, whereas < 1.0 will oversample. Default = 1.0
loglam_grid (numpy.ndarray, optional) – Wavelength grid in log10(wave) onto which the image stacks will be rectified. The code will automatically choose the subset of this grid encompassing the wavelength coverage of the image stacks provided (see waveimg_stack). Either loglam_grid or wave_grid must be provided.
wave_grid (numpy.ndarray, optional) – Same as loglam_grid but in angstroms instead of log(angstroms). (TODO: Check units…)
maskdef_dict (dict, optional) – Dictionary containing all the maskdef info. The quantities saved are: maskdef_id, maskdef_objpos, maskdef_slitcen, maskdef_designtab. To learn what they are see the SlitTraceSet datamodel.
interp_dspat (bool, optional) – Interpolate in the spatial coordinate image to facilitate running through core.extract.local_skysub_extract. This can be slow. Default=True.
- Returns
Returns a dict with the following keys:
wave_bins:
dspat_bins:
wave_mid:
wave_min:
wave_max:
dspat_mid:
sciimg: float ndarray shape = (nspec_coadd, nspat_coadd): Rectified and coadded science image
sciivar: float ndarray shape = (nspec_coadd, nspat_coadd): Rectified and coadded inverse variance image with correct error propagation
imgminsky: float ndarray shape = (nspec_coadd, nspat_coadd): Rectified and coadded sky subtracted image
outmask: bool ndarray shape = (nspec_coadd, nspat_coadd): Output mask for rectified and coadded images. True = Good, False=Bad.
nused: int ndarray shape = (nspec_coadd, nspat_coadd): Image of integers indicating the number of images from the image stack that contributed to each pixel
waveimg: float ndarray shape = (nspec_coadd, nspat_coadd): The averaged wavelength image corresponding to the rectified and coadded data.
dspat: float ndarray shape = (nspec_coadd, nspat_coadd): The average spatial offsets in pixels from the reference trace trace_stack corresponding to the rectified and coadded data.
nspec: int
nspat: int
maskdef_id: int
maskdef_slitcen: int
maskdef_objpos: int
maskdef_designtab: int
- Return type
dict
- pypeit.core.coadd.compute_stack(wave_grid, waves, fluxes, ivars, masks, weights, min_weight=1e-08)[source]
Compute a stacked spectrum from a set of exposures on the specified wave_grid with proper treatment of weights and masking. This code uses np.histogram to combine the data using NGP and does not perform any interpolations, and thus does not correlate errors. It uses wave_grid to determine the set of wavelength bins that the data are averaged on. The final spectrum will be on an output wavelength grid which is not the same as wave_grid. The output wavelength grid is the weighted average of the individual wavelengths used for each exposure that fell into a given wavelength bin in the input wave_grid. This 1d coadding routine thus maintains the independence of the errors for each pixel in the combined spectrum and computes the weighted averaged wavelengths of each pixel in an analogous way to the 2d extraction procedure, which also never interpolates, to avoid correlating errors.
- Parameters
wave_grid (numpy.ndarray) – new wavelength grid desired. This will typically be a regularly spaced grid created by the get_wave_grid routine. The reason for the ngrid+1 is that this is the general way to specify a set of bins if you desire ngrid bin centers, i.e. the output stacked spectra have ngrid elements. The spacing of this grid can be regular in lambda (better for multislit) or log lambda (better for echelle). This new wavelength grid should be designed with the sampling of the data in mind. For example, the code will work fine if you choose the sampling to be too fine, but then the number of exposures contributing to any given wavelength bin will be one or zero in the limiting case of very small wavelength bins. For larger wavelength bins, the number of exposures contributing to a given bin will be larger. shape=(ngrid +1,)
waves (numpy.ndarray) – wavelength arrays for spectra to be stacked. Note that the wavelength grids can in general be different for each exposure and irregularly spaced. shape=(nspec, nexp)
fluxes (numpy.ndarray) – fluxes for each exposure on the waves grid shape=(nspec, nexp)
ivars (numpy.ndarray) – Inverse variances for each exposure on the waves grid shape=(nspec, nexp)
masks (numpy.ndarray) – Boolean masks for each exposure on the waves grid. True=Good. shape=(nspec, nexp)
weights (numpy.ndarray) – Weights to be used for combining your spectra. These are computed using sn_weights shape=(nspec, nexp)
min_weight (float, optional) – Minimum allowed weight for any individual spectrum
- Returns
wave_stack (numpy.ndarray) – Wavelength grid for stacked spectrum. As discussed above, this is the weighted average of the wavelengths of each spectrum that contributed to a bin in the input wave_grid wavelength grid. It thus has ngrid elements, whereas wave_grid has ngrid+1 elements to specify the ngrid total number of bins. Note that wave_stack is NOT simply the wave_grid bin centers, since it computes the weighted average. shape=(ngrid,)
flux_stack (numpy.ndarray) – Final stacked spectrum on wave_stack wavelength grid shape=(ngrid,)
ivar_stack (numpy.ndarray) – Inverse variance spectrum on wave_stack wavelength grid. Errors are propagated according to weighting and masking. shape=(ngrid,)
mask_stack (numpy.ndarray) – Boolean Mask for stacked spectrum on wave_stack wavelength grid. True=Good. shape=(ngrid,)
nused (numpy.ndarray) – Number of exposures which contributed to each pixel in the wave_stack. Note that this is in general different from nexp because of masking, but also because of the sampling specified by wave_grid. In other words, sometimes more spectral pixels in the irregularly gridded input wavelength array waves will land in one bin versus another depending on the sampling. shape=(ngrid,)
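The NGP histogram combination described above can be illustrated with plain numpy on toy data (the real compute_stack additionally handles masking, inverse-variance propagation, and a minimum weight):

```python
import numpy as np

# Toy data: two exposures with offset, irregular wavelength sampling
wave_grid = np.linspace(5000., 5010., 11)       # 11 bin edges -> ngrid = 10 bins
waves = np.stack([np.linspace(5000.2, 5009.2, 10),
                  np.linspace(5000.7, 5009.7, 10)], axis=1)
fluxes = np.ones_like(waves)
weights = np.ones_like(waves)

# np.histogram with weights performs the NGP (no-interpolation) sums per bin
wsum, _ = np.histogram(waves.ravel(), bins=wave_grid, weights=weights.ravel())
fsum, _ = np.histogram(waves.ravel(), bins=wave_grid,
                       weights=(weights * fluxes).ravel())
lsum, _ = np.histogram(waves.ravel(), bins=wave_grid,
                       weights=(weights * waves).ravel())

good = wsum > 0
denom = np.where(good, wsum, 1.0)
flux_stack = np.where(good, fsum / denom, 0.0)
wave_stack = np.where(good, lsum / denom, 0.0)  # weighted mean wavelength, not bin center
nused, _ = np.histogram(waves.ravel(), bins=wave_grid)
```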
- pypeit.core.coadd.ech_combspec(waves, fluxes, ivars, masks, weights_sens, nbest=None, wave_method='log10', dwave=None, dv=None, dloglam=None, spec_samp_fact=1.0, wave_grid_min=None, wave_grid_max=None, ref_percentile=70.0, maxiter_scale=5, niter_order_scale=3, sigrej_scale=3.0, scale_method='auto', hand_scale=None, sn_min_polyscale=2.0, sn_min_medscale=0.5, sn_smooth_npix=None, const_weights=False, maxiter_reject=5, sn_clip=30.0, lower=3.0, upper=3.0, maxrej=None, qafile=None, debug_scale=False, debug=False, show_order_stacks=False, show_order_scale=False, show_exp=False, show=False, verbose=False)[source]
Driver routine for coadding Echelle spectra. Calls combspec which is the main stacking algorithm. It will deliver three fits files: spec1d_order_XX.fits (stacked individual orders, one order per extension), spec1d_merge_XX.fits (straight combine of stacked individual orders), spec1d_stack_XX.fits (a giant stack of all exposures and all orders). In most cases, you should use spec1d_stack_XX.fits for your scientific analyses since it rejects most outliers.
..todo.. – Clean up the doc formatting
- Parameters
waves (numpy.ndarray) – Wavelength arrays for spectra to be stacked. shape=(nspec, norder, nexp)
fluxes (numpy.ndarray) – Flux arrays for spectra to be stacked. shape=(nspec, norder, nexp)
ivars (numpy.ndarray) – ivar arrays for spectra to be stacked. shape=(nspec, norder, nexp)
masks (numpy.ndarray) – Mask array with shape (nspec, norders, nexp) containing the spectra to be coadded.
weights_sens (numpy.ndarray) – Sensitivity function weights required for the relative weighting of the orders. Must have the same shape as waves, etc.
nbest (int, optional) – Number of orders to use for estimating the per exposure weights. Default is nbest=None, which will just use one fourth of the orders.
wave_method (str, optional) – method for generating the new wavelength grid with get_wave_grid. Default is ‘log10’ which creates a uniformly spaced grid in log10(lambda), which is typically the best for echelle spectrographs
dwave (float, optional) – dispersion in units of A in case you want to specify it for get_wave_grid, otherwise the code computes the median spacing from the data.
dv (float, optional) – Dispersion in units of km/s in case you want to specify it in the get_wave_grid (for the ‘velocity’ option), otherwise a median value is computed from the data.
dloglam (float, optional) – Dispersion in dimensionless units
spec_samp_fact (float, optional) – Make the wavelength grid sampling finer (spec_samp_fact < 1.0) or coarser (spec_samp_fact > 1.0) by this sampling factor. This basically multiples the ‘native’ spectral pixels by spec_samp_fact, i.e. units spec_samp_fact are pixels.
wave_grid_min (float, optional) – In case you want to specify the minimum wavelength in your wavelength grid, default=None computes from data.
wave_grid_max (float, optional) – In case you want to specify the maximum wavelength in your wavelength grid, default=None computes from data.
wave_grid_input (numpy.ndarray) – User input wavelength grid to be used with the ‘user_input’ wave_method. Shape is (nspec_input,)
ref_percentile (float, optional, default=70.0) – percentile fraction cut used for selecting the minimum SNR cut for robust_median_ratio
maxiter_scale (int, optional, default=5) – Maximum number of iterations performed for rescaling spectra.
sigrej_scale (float, optional, default=3.0) – Rejection threshold used for rejecting pixels when rescaling spectra with scale_spec.
hand_scale (numpy.ndarray, optional) – Array of hand scale factors, not well tested
sn_min_polyscale (float, optional, default = 2.0) – minimum SNR for performing polynomial scaling
sn_min_medscale (float, optional, default = 0.5) – minimum SNR for performing median scaling
const_weights (bool, optional) – If True, apply constant weight
maxiter_reject (int, optional, default=5) – maximum number of iterations for stacking and rejection. The code stops iterating either when the output mask does not change between successive iterations or when maxiter_reject is reached.
sn_clip (float, optional, default=30.0) – Errors are capped during rejection so that the S/N is never greater than sn_clip. This prevents overly aggressive rejection in high S/N spectra which nevertheless differ at a level greater than the implied S/N due to systematics.
lower (float, optional, default=3.0) – lower rejection threshold for djs_reject
upper (float, optional, default=3.0) – upper rejection threshold for djs_reject
maxrej (int, optional) – maximum number of pixels to reject in each iteration for djs_reject.
- Returns
wave_grid_mid (numpy.ndarray) – Wavelength grid (in Angstrom) evaluated at the bin centers, uniformly-spaced either in lambda or log10-lambda/velocity. See core.wavecal.wvutils.py for more. shape=(ngrid,)
a_tuple (tuple) – (wave_giant_stack: ndarray, (ngrid,): Wavelength grid for stacked spectrum. As discussed above, this is the weighted average of the wavelengths of each spectrum that contributed to a bin in the input wave_grid wavelength grid. It thus has ngrid elements, whereas wave_grid has ngrid+1 elements to specify the ngrid total number of bins. Note that wave_giant_stack is NOT simply the wave_grid bin centers, since it computes the weighted average; flux_giant_stack: ndarray, (ngrid,): Final stacked spectrum on wave_stack wavelength grid; ivar_giant_stack: ndarray, (ngrid,): Inverse variance spectrum on wave_stack wavelength grid. Errors are propagated according to weighting and masking; mask_giant_stack: ndarray, bool, (ngrid,): Mask for stacked spectrum on wave_stack wavelength grid. True=Good.)
another_tuple (tuple) – (waves_stack_orders, fluxes_stack_orders, ivars_stack_orders, masks_stack_orders, None)
- pypeit.core.coadd.get_spat_bins(thismask_stack, trace_stack, spat_samp_fact=1.0)[source]
Determine the spatial bins for a 2d coadd and relative pixel coordinate images. This routine loops over all the images being coadded and creates an image of spatial pixel positions relative to the reference trace for each image in units of the desired rebinned spatial pixel sampling spat_samp_fact. The minimum and maximum relative pixel positions in this frame are then used to define a spatial position grid with whatever desired pixel spatial sampling.
- Parameters
thismask_stack (list) – List of boolean arrays containing the masks indicating which pixels are on the slit in question. True values are on the slit; False values are off the slit. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.
ref_trace_stack (list) –
List of reference traces about which the images are rectified and coadded. If the images were not dithered then this reference trace can simply be the center of the slit:
slitcen = (slit_left + slit_righ)/2
If the images were dithered, then this object can either be the slitcen appropriately shifted with the dither pattern, or it could be the trace of the object of interest in each exposure determined by running PypeIt on the individual images. The list has nimgs elements, each of which is a 1D numpy.ndarray of shape (nspec,).
spat_samp_fact (float, optional) – Spatial sampling for 2d coadd spatial bins in pixels. A value > 1.0 (i.e. bigger pixels) will downsample the images spatially, whereas < 1.0 will oversample. Default = 1.0
- Returns
dspat_bins (numpy.ndarray) – shape (spat_max_int +1 - spat_min_int,) Array of spatial bins for rectifying the image.
dspat_stack (numpy.ndarray) – shape (nimgs, nspec, nspat) Image stack which has the spatial position of each exposure relative to the trace in the trace_stack for that image.
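The relative spatial coordinate computation described above can be sketched for a single image (toy mask and reference trace; the real routine loops over the full image stack):

```python
import numpy as np

nspec, nspat = 100, 50
thismask = np.zeros((nspec, nspat), dtype=bool)
thismask[:, 10:40] = True                      # pixels on the slit
ref_trace = np.full(nspec, 25.0)               # reference trace (slit center)
spat_samp_fact = 1.0

# Spatial position of every pixel relative to the trace, in rebinned pixels
spat_img = np.arange(nspat)[None, :]
dspat = (spat_img - ref_trace[:, None]) / spat_samp_fact

# Bins spanning the min/max relative position of illuminated pixels
spat_min = int(np.floor(dspat[thismask].min()))
spat_max = int(np.ceil(dspat[thismask].max()))
dspat_bins = np.arange(spat_min, spat_max + 1, 1.0)
```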
- pypeit.core.coadd.get_wave_bins(thismask_stack, waveimg_stack, wave_grid)[source]
Utility routine to get the wavelength bins for 2d coadds from a mask
- Parameters
thismask_stack (list) – List of boolean arrays containing the masks indicating which pixels are on the slit in question. True values are on the slit; False values are off the slit. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.
waveimg_stack (list) – List of the wavelength images, each of which is a float numpy.ndarray. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.
wave_grid (array shape (ngrid)) – The wavelength grid created for the 2d coadd
- Returns
wave_bins – shape (ind_upper-ind_lower + 1, ) Wavelength bins that are relevant given the illuminated pixels (thismask_stack) and wavelength coverage (waveimg_stack) of the image stack
- Return type
- pypeit.core.coadd.get_wave_ind(wave_grid, wave_min, wave_max)[source]
Utility routine used by coadd2d to determine the starting and ending indices of a wavelength grid.
- Parameters
wave_grid (numpy.ndarray) – Wavelength grid.
wave_min (float) – Minimum wavelength covered by the data in question.
wave_max (float) – Maximum wavelength covered by the data in question.
- Returns
ind_lower (int) – Lower index corresponding to wave_min
ind_upper (int) – Upper index corresponding to wave_max
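This index bracketing can be sketched with np.searchsorted (toy grid; the exact edge conventions in pypeit may differ):

```python
import numpy as np

wave_grid = np.linspace(4000., 5000., 11)      # 4000, 4100, ..., 5000
wave_min, wave_max = 4250., 4750.

# Largest index with wave_grid <= wave_min, smallest with wave_grid >= wave_max
ind_lower = np.searchsorted(wave_grid, wave_min, side='right') - 1
ind_upper = np.searchsorted(wave_grid, wave_max, side='left')
```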
- pypeit.core.coadd.get_ylim(flux, ivar, mask)[source]
Utility routine for setting the plot limits for QA plots.
- Parameters
flux (numpy.ndarray) – (nspec,) flux array
ivar (numpy.ndarray) – (nspec,) inverse variance array
mask (numpy.ndarray) – bool, (nspec,) mask array. True=Good
- Returns
lower and upper limits for plotting.
- Return type
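One plausible sketch of such a limit calculation (the actual pypeit heuristic may differ): take a robust central value of the good-pixel flux, pad by a few times the typical error, and clip the upper limit at a high percentile so outlier spikes do not dominate the plot.

```python
import numpy as np

flux = np.concatenate([np.ones(95), np.full(5, 50.)])   # spectrum with spikes
ivar = np.ones_like(flux)
mask = np.ones(flux.size, dtype=bool)                   # True=Good

med = np.median(flux[mask])
sig = 1.0 / np.sqrt(np.median(ivar[mask]))              # typical 1-sigma error
ylim = (med - 5.0 * sig, np.percentile(flux[mask], 95.) + 5.0 * sig)
```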
- pypeit.core.coadd.interp_oned(wave_new, wave_old, flux_old, ivar_old, gpm_old, sensfunc=False)[source]
Interpolate a 1D spectrum onto a new wavelength grid.
Interpolation is done using scipy.interpolate.interp1d with cubic interpolation. Any wavelengths in wave_new that are beyond the range of wave_old are set to np.nan and masked via the output good-pixel mask.
Warning
Any wavelength in wave_old that is less than 1 is assumed to indicate that the wavelength is invalid!
- Parameters
wave_new (numpy.ndarray) – New wavelength grid for the output spectra. Must be 1D.
wave_old (numpy.ndarray) – Old wavelength grid. Must be 1D, need not have the same size as wave_new.
flux_old (numpy.ndarray) – Old flux on the wave_old grid. Shape must match wave_old.
ivar_old (numpy.ndarray) – Old ivar on the wave_old grid. Shape must match wave_old.
gpm_old (numpy.ndarray) – Old good-pixel mask (True=Good) on the wave_old grid. Shape must match wave_old.
sensfunc (bool, optional) – If True, the quantities flux*delta_wave and the corresponding ivar/delta_wave**2 will be interpolated and returned instead of flux and ivar. This is useful for sensitivity function computation where we need flux*(wavelength bin width). Because delta_wave is a difference of the wavelength grid, interpolating in the presence of masked data requires special care.
- Returns
Returns three numpy.ndarray objects with the interpolated flux, inverse variance, and good-pixel mask arrays with the length matching the new wavelength grid.
- Return type
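The described behavior (cubic interp1d, np.nan outside the old range, masked in the output good-pixel mask) can be sketched as:

```python
import numpy as np
from scipy.interpolate import interp1d

wave_old = np.linspace(4000., 5000., 50)
flux_old = np.sin(wave_old / 100.)
wave_new = np.linspace(3900., 5100., 60)       # extends beyond wave_old

# Cubic interpolation; out-of-range wavelengths become NaN
func = interp1d(wave_old, flux_old, kind='cubic',
                bounds_error=False, fill_value=np.nan)
flux_new = func(wave_new)
gpm_new = np.isfinite(flux_new)                # mask the NaN pixels
```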
- pypeit.core.coadd.interp_spec(wave_new, waves, fluxes, ivars, gpms, sensfunc=False)[source]
Interpolate a set of spectra onto a new wavelength grid.
The method can perform two types of interpolation, depending on the shapes of the input arrays.
If the new wavelength grid (wave_new) is 1D, all input spectra are interpolated onto this new grid. The input spectra can be provided as either 1D or 2D arrays.
If the new wavelength grid (wave_new) is 2D, all input spectra must be 1D. The single spectrum is then interpolated onto each of the new wavelength grids.
- Parameters
wave_new (numpy.ndarray, shape (nspec,) or (nspec, nimgs),) – New wavelength grid for output spectra. Shape can be 1D or 2D. See the method description for how this affects the code flow above.
waves (numpy.ndarray, shape (nspec,) or (nspec, nexp)) – Wavelength vector for current spectra. Shape can be 1D or 2D, where nexp need not equal nimgs. See the method description for how this affects the code flow above.
fluxes (numpy.ndarray) – Flux vectors. Shape must match waves.
ivars (numpy.ndarray) – Inverse variance vectors. Shape must match waves.
gpms (numpy.ndarray) – Boolean good-pixel masks for each spectrum (True=Good). Shape must match waves.
sensfunc (bool, optional) – If True, the quantities flux*delta_wave and the corresponding ivar/delta_wave**2 will be interpolated and returned instead of flux and ivar. This is useful for sensitivity function computation where we need flux*(wavelength bin width). Because delta_wave is a difference of the wavelength grid, interpolating in the presence of masked data requires special care.
- Returns
fluxes_inter (numpy.ndarray) – interpolated flux with size and shape matching the new wavelength grid.
ivars_inter (numpy.ndarray) – interpolated inverse variance with size and shape matching the new wavelength grid.
gpms_inter (numpy.ndarray) – interpolated good-pixel mask with size and shape matching the new wavelength grid.
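The shape-based dispatch described above can be sketched as follows (np.interp stands in here for the cubic interpolation actually performed by interp_oned):

```python
import numpy as np

def interp_spec_sketch(wave_new, waves, fluxes):
    """Toy version of the 1D/2D dispatch; linear interpolation only."""
    if wave_new.ndim == 1:
        if waves.ndim == 1:
            return np.interp(wave_new, waves, fluxes)
        # 2D input spectra: interpolate each exposure onto the 1D grid
        return np.stack([np.interp(wave_new, waves[:, i], fluxes[:, i])
                         for i in range(waves.shape[1])], axis=1)
    # 2D wave_new: a single 1D spectrum onto each of the new grids
    return np.stack([np.interp(wave_new[:, i], waves, fluxes)
                     for i in range(wave_new.shape[1])], axis=1)
```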
- pypeit.core.coadd.median_filt_spec(flux, ivar, gpm, med_width)[source]
Utility routine to median filter a spectrum using the mask and propagating the errors using the utils.fast_running_median function.
- Parameters
flux (numpy.ndarray) – flux array with shape (nspec,)
ivar (numpy.ndarray) – inverse variance with shape (nspec,)
gpm (numpy.ndarray) – Boolean mask on the spectrum with shape (nspec,). True = good
med_width (float) – width for median filter in pixels
- Returns
flux_med (numpy.ndarray) – Median filtered flux
ivar_med (numpy.ndarray) – corresponding propagated inverse variance
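A masked median filter can be sketched as below (the real routine uses utils.fast_running_median and also propagates the inverse variance; scipy's median_filter is a stand-in here):

```python
import numpy as np
from scipy.ndimage import median_filter

flux = np.array([1., 1., 10., 1., 1., 1.])
gpm = np.array([True, True, False, True, True, True])  # bad pixel at index 2

# Replace masked pixels before filtering so they do not bias the median
flux_fill = np.where(gpm, flux, np.median(flux[gpm]))
flux_med = median_filter(flux_fill, size=3, mode='nearest')
```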
- pypeit.core.coadd.multi_combspec(waves, fluxes, ivars, masks, sn_smooth_npix=None, wave_method='linear', dwave=None, dv=None, dloglam=None, spec_samp_fact=1.0, wave_grid_min=None, wave_grid_max=None, ref_percentile=70.0, maxiter_scale=5, sigrej_scale=3.0, scale_method='auto', hand_scale=None, sn_min_polyscale=2.0, sn_min_medscale=0.5, const_weights=False, maxiter_reject=5, sn_clip=30.0, lower=3.0, upper=3.0, maxrej=None, qafile=None, debug=False, debug_scale=False, show_scale=False, show=False)[source]
Routine for coadding longslit/multi-slit spectra. Calls combspec which is the main stacking algorithm.
- Parameters
waves (numpy.ndarray) – Wavelength array with shape (nspec, nexp) containing the spectra to be coadded.
fluxes (numpy.ndarray) – Flux array with shape (nspec, nexp) containing the spectra to be coadded.
ivars (numpy.ndarray) – Ivar array with shape (nspec, nexp) containing the spectra to be coadded.
masks (numpy.ndarray) – Mask array with shape (nspec, nexp) containing the spectra to be coadded.
sn_smooth_npix (int, optional) – Number of pixels to median filter by when computing S/N used to decide how to scale and weight spectra. If set to None, the code will determine the effective number of good pixels per spectrum in the stack that is being co-added and use 10% of this neff.
wave_method (str, optional) – Method for generating the new wavelength grid with get_wave_grid. Default is ‘linear’, which creates a uniformly spaced grid in lambda. See the documentation of get_wave_grid for a description of the options.
dwave (float, optional) – dispersion in units of A in case you want to specify it for get_wave_grid, otherwise the code computes the median spacing from the data.
dv (float, optional) – Dispersion in units of km/s in case you want to specify it in the get_wave_grid (for the ‘velocity’ option), otherwise a median value is computed from the data.
spec_samp_fact (float, optional) – Make the wavelength grid sampling finer (spec_samp_fact < 1.0) or coarser (spec_samp_fact > 1.0) by this sampling factor. This basically multiplies the ‘native’ spectral pixel size by spec_samp_fact, i.e. the units of spec_samp_fact are pixels.
wave_grid_min (float, optional) – In case you want to specify the minimum wavelength in your wavelength grid, default=None computes from data.
wave_grid_max (float, optional) – In case you want to specify the maximum wavelength in your wavelength grid, default=None computes from data.
wave_grid_input (numpy.ndarray) – User input wavelength grid to be used with the ‘user_input’ wave_method. Shape is (nspec_input,)
maxiter_reject (int, optional) – Maximum number of iterations for stacking and rejection. The code stops iterating either when the output mask does not change between successive iterations or when maxiter_reject is reached. Default=5.
ref_percentile (float, optional) – Percentile cut used for selecting the minimum SNR cut for robust_median_ratio. Should be a number between 0 and 100. Default=70.0.
maxiter_scale (int, optional) – Maximum number of iterations performed for rescaling spectra. Default=5.
sigrej_scale (float, optional) – Rejection threshold used for rejecting pixels when rescaling spectra with scale_spec. Default=3.0
scale_method (str, optional) – Options are auto, poly, median, none, or hand. Hand is not well tested. The user can optionally specify the rescaling method. Default=’auto’ lets the code determine this automatically, which works well.
hand_scale (numpy.ndarray, optional) – Array of hand scale factors, not well tested
sn_min_polyscale (float, optional) – maximum SNR for performing median scaling
sn_min_medscale (float, optional) – minimum SNR for performing median scaling
const_weights (bool, optional) – If True, use constant weights for each spectrum.
sn_clip (float, optional) – Errors are capped during rejection so that the S/N is never greater than sn_clip. This prevents overly aggressive rejection in high S/N ratio spectra which nevertheless differ at a level greater than the implied S/N due to systematics.
lower (float, optional) – lower rejection threshold for djs_reject
upper (float, optional) – upper rejection threshold for djs_reject
maxrej (int, optional) – maximum number of pixels to reject in each iteration for djs_reject.
nmaskedge (int, optional) – Number of edge pixels to mask. This should be removed/fixed.
qafile (str, optional) – Root name for QA; if None, it will be determined from the outfile. Default=None.
outfile (str, optional) – Root name for QA; if None, it will come from the target name in the fits header. Default=None.
debug (bool, optional) – Show all QA plots, useful for debugging. Note there are lots of QA plots, so only set this to True if you want to inspect them all. Default=False.
debug_scale (bool, optional) – Show interactive QA plots for the rescaling of the spectra. Default=False.
show (bool, optional) – Show key QA plots. Default=False.
- Returns
wave_grid_mid: numpy.ndarray, (ngrid,): Wavelength grid (in Angstrom) evaluated at the bin centers, uniformly-spaced either in lambda or log10-lambda/velocity. See core.wavecal.wvutils.py for more.
wave_stack: numpy.ndarray, (ngrid,): Wavelength grid for stacked spectrum. As discussed above, this is the weighted average of the wavelengths of each spectrum that contributed to a bin in the input wave_grid wavelength grid. It thus has ngrid elements, whereas wave_grid has ngrid+1 elements to specify the ngrid total number of bins. Note that wave_stack is NOT simply the wave_grid bin centers, since it computes the weighted average.
flux_stack: numpy.ndarray, (ngrid,): Final stacked spectrum on wave_stack wavelength grid
ivar_stack: numpy.ndarray, (ngrid,): Inverse variance spectrum on wave_stack wavelength grid. Errors are propagated according to weighting and masking.
mask_stack: numpy.ndarray, bool, (ngrid,): Mask for stacked spectrum on wave_stack wavelength grid. True=Good.
- Return type
- pypeit.core.coadd.order_median_scale(waves, fluxes, ivars, masks, min_good=0.05, maxiters=5, max_factor=10.0, sigrej=3, debug=False, show=False)[source]
Function to scale different orders by the median S/N.
- Parameters
waves (numpy.ndarray) – wavelength array of your spectra with the shape of (nspec, norder)
fluxes (numpy.ndarray) – flux array of your spectra with the shape of (nspec, norder)
ivars (numpy.ndarray) – ivar array of your spectra with the shape of (nspec, norder)
masks (numpy.ndarray) – mask for your spectra with the shape of (nspec, norder)
min_good (float, optional) – minimum fraction of the total number of good pixels needed to estimate the median ratio
maxiters (int or float, optional) – maximum iterations for rejecting outliers
max_factor (float, optional) – maximum scale factor
sigrej (float, optional) – sigma used for rejecting outliers
debug (bool, optional) – if True show intermediate QA
show (bool, optional) – if True show the final QA
- Returns
(1) fluxes_new (numpy.ndarray): re-scaled fluxes with the shape of (nspec, norder). (2) ivars_new (numpy.ndarray): re-scaled ivars with the shape of (nspec, norder) (3) order_ratios (numpy.ndarray): an array of scale factor with the length of norder
- Return type
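The per-order step can be sketched as follows: in the wavelength overlap of two adjacent orders, the scale factor is the ratio of median fluxes. This is a hypothetical simplification (the function name and overlap convention are assumptions; the real routine also iteratively rejects outliers and checks min_good):

```python
import numpy as np

def order_overlap_ratio(wave_lo, flux_lo, wave_hi, flux_hi):
    # Overlap region, assuming wave_lo ends inside wave_hi (adjacent orders).
    lo, hi = wave_hi.min(), wave_lo.max()
    in_lo = (wave_lo >= lo) & (wave_lo <= hi)
    in_hi = (wave_hi >= lo) & (wave_hi <= hi)
    # Median flux ratio in the overlap = factor to multiply into flux_hi.
    return np.median(flux_lo[in_lo]) / np.median(flux_hi[in_hi])
```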
- pypeit.core.coadd.poly_model_eval(theta, func, model, wave, wave_min, wave_max)[source]
Routine to evaluate the polynomial fit
- Parameters
theta (numpy.ndarray) – coefficient parameter vector of type=float
func (str) – polynomial type
model (str) – model type, valid model types are ‘poly’, ‘square’, or ‘exp’, corresponding to normal polynomial, squared polynomial, or exponentiated polynomial
wave (numpy.ndarray) – array of wavelength values of type=float
wave_min (float) – minimum wavelength for polynomial fit range
wave_max (float) – maximum wavelength for polynomial fit range
- Returns
Array of evaluated polynomial with same shape as wave
- Return type
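An illustrative evaluation, assuming a Legendre basis (func=’legendre’) on wavelengths rescaled to [-1, 1]; the three documented model types transform the raw polynomial value:

```python
import numpy as np
from numpy.polynomial import legendre

def poly_model_eval_sketch(theta, model, wave, wave_min, wave_max):
    # Map wavelengths to [-1, 1] for a stable polynomial basis (assumed).
    x = 2.0 * (wave - wave_min) / (wave_max - wave_min) - 1.0
    y = legendre.legval(x, theta)
    if model == 'poly':
        return y                             # normal polynomial
    if model == 'square':
        return y**2                          # squared: guarantees positivity
    if model == 'exp':
        return np.exp(np.clip(y, -50, 50))   # exponentiated: also positive
    raise ValueError(f"unknown model: {model}")
```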
- pypeit.core.coadd.poly_ratio_fitfunc(flux_ref, gpm, arg_dict, init_from_last=None, **kwargs_opt)[source]
Function to be optimized by robust_optimize for solve_poly_ratio polynomial rescaling of one spectrum to match a reference spectrum. This function has the correct format for running robust_optimize optimization. In addition to running the optimization, this function recomputes the error vector ivartot for the error rejection that takes place at each iteration of the robust_optimize optimization. The ivartot is also renormalized using the renormalize_errors function enabling rejection. A scale factor is multiplied into the true errors to allow one to reject based on the statistics of the actual error distribution.
- Args:
- flux_ref (numpy.ndarray):
Reference flux that we are trying to rescale our spectrum to match
- gpm (numpy.ndarray):
Boolean array with mask for the current iteration of the optimization. True=good
- arg_dict (dict):
dictionary containing arguments for the optimizing function. See poly_ratio_fitfunc_chi2 for how arguments are used. They are mask, flux_med, flux_ref_med, ivar_ref_med, wave, wave_min, wave_max, func
- init_from_last (obj, optional):
Use this scipy optimization object from a previous iteration as the guess
- kwargs_opt (dict):
arguments to be passed to the optimizer, which in this case is just vanilla scipy.minimize with the default optimizer
- Returns
Three objects are returned. (1) scipy optimization object, (2) scale factor to be applied to the data to match the reference spectrum flux_ref, (3) error vector to be used for the rejection that takes place at each iteration of the robust_optimize optimization
- Return type
- pypeit.core.coadd.poly_ratio_fitfunc_chi2(theta, gpm, arg_dict)[source]
Function for computing the chi^2 loss function for solving for the polynomial rescaling of one spectrum to another. There are two non-standard things implemented here which increase the robustness. The first is a non-standard error used for the chi, which adds robustness and increases the stability of the optimization. This was taken from the idlutils solve_poly_ratio code. The second is that the chi is remapped using the scipy Huber loss function to reduce sensitivity to outliers, based on the scipy cookbook on robust optimization.
- Parameters
theta (numpy.ndarray) – parameter vector for the polynomial fit
gpm (numpy.ndarray) – boolean mask for the current iteration of the optimization, True=good
arg_dict (dict) – dictionary containing arguments
- Returns
this is effectively the chi^2, i.e. the quantity to be minimized by the optimizer. Note that this is not formally the chi^2 since the huber loss function re-maps the chi to be less sensitive to outliers.
- Return type
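The Huber remap can be sketched with scipy.special.huber, which is quadratic near zero and linear in the tails, so a single outlier cannot dominate the objective (the function name and delta knob here are hypothetical; the real code's loss scale may differ):

```python
import numpy as np
from scipy.special import huber

def huber_chi2(chi, delta=1.0):
    # Sum of Huber-remapped chi values: behaves like 0.5*chi**2 for
    # |chi| <= delta and grows only linearly beyond that.
    return np.sum(huber(delta, chi))
```

Compare with a plain sum of 0.5*chi**2: identical for small chi, far smaller for outliers.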
- pypeit.core.coadd.rebin2d(spec_bins, spat_bins, waveimg_stack, spatimg_stack, thismask_stack, inmask_stack, sci_list, var_list)[source]
Rebin a set of images and propagate variance onto a new spectral and spatial grid. This routine effectively “rectifies” images using np.histogram2d, which is extremely fast and effectively performs nearest grid point interpolation.
- Parameters
spec_bins (numpy.ndarray) – Spectral bins to rebin to. float, shape = (nspec_rebin)
spat_bins (numpy.ndarray) – Spatial bins to rebin to. float ndarray, shape = (nspat_rebin)
waveimg_stack (numpy.ndarray) – Stack of nimgs wavelength images with shape = (nspec, nspat) each float , shape = (nimgs, nspec, nspat)
spatimg_stack (numpy.ndarray) – Stack of nimgs spatial position images with shape = (nspec, nspat) each float, shape = (nimgs, nspec, nspat)
thismask_stack (numpy.ndarray) – Stack of nimgs images with shape = (nspec, nspat) indicating the locations of the pixels on an image that are on the slit in question. bool, shape = (nimgs, nspec, nspat)
inmask_stack (numpy.ndarray) – Stack of nimgs images with shape = (nspec, nspat) indicating which pixels on an image are masked. True = Good, False = Bad bool ndarray, shape = (nimgs, nspec, nspat)
sci_list (list) – Nested list of images, i.e. list of lists of images, where sci_list[i][j] is an image of shape (nspec, nspat), where the shape can be different for each image. The ith index is the image type, i.e. sciimg, skysub, tilts, waveimg, and the jth index is the exposure or image number, i.e. nimgs. These images are to be rebinned onto the common grid.
var_list (list) – Nested list of variance images, i.e. list of lists of images. The format is the same as for sci_list, but note that sci_list and var_list can have different lengths. Since this routine performs a NGP rebinning, it effectively computes the average of a science image landing on a pixel. This means that the science is weighted by 1/norm_rebin_stack, and hence variances must be weighted by that factor squared, which is why they must be input here as a separate list.
- Returns
sci_list_out (list) – The list of ndarray rebinned images with new shape (nimgs, nspec_rebin, nspat_rebin)
var_list_out (list) – The list of ndarray rebinned variance images with correct error propagation with shape (nimgs, nspec_rebin, nspat_rebin)
norm_rebin_stack (int ndarray, shape (nimgs, nspec_rebin, nspat_rebin)) – An image stack indicating the integer occupation number of a given pixel. In other words, this number would be zero for empty bins, one for bins that were populated by a single pixel, etc. This image takes the input inmask_stack into account. The output mask for each image can be formed via outmask_rebin_stack = (norm_rebin_stack > 0)
nsmp_rebin_stack (int ndarray, shape (nimgs, nspec_rebin, nspat_rebin)) – An image stack indicating the integer occupation number of a given pixel taking only the thismask_stack into account, but not the inmask_stack. This image is mainly constructed for bookkeeping purposes, as it represents the number of times each pixel in the rebinned image was populated taking only the “geometry” of the rebinning into account (i.e. the thismask_stack), but not the masking (inmask_stack).
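The NGP rebinning for a single image can be sketched with np.histogram2d: histogram the masked science values onto the new grid and divide by the occupation number of each bin (a hypothetical single-image version; the real routine loops over stacks and also propagates variance):

```python
import numpy as np

def ngp_rebin(spec_bins, spat_bins, waveimg, spatimg, inmask, sci):
    w = inmask.ravel().astype(float)
    # Occupation number of each output bin (only unmasked pixels count).
    norm, _, _ = np.histogram2d(waveimg.ravel(), spatimg.ravel(),
                                bins=[spec_bins, spat_bins], weights=w)
    # Sum of science values landing in each bin.
    tot, _, _ = np.histogram2d(waveimg.ravel(), spatimg.ravel(),
                               bins=[spec_bins, spat_bins], weights=w * sci.ravel())
    good = norm > 0
    # Average of contributing pixels; empty bins are set to zero.
    sci_rebin = np.where(good, tot / np.where(good, norm, 1.0), 0.0)
    return sci_rebin, norm
```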
- pypeit.core.coadd.renormalize_errors(chi, mask, clip=6.0, max_corr=5.0, title='', debug=False)[source]
Function for renormalizing errors. The distribution of input chi (defined by chi = (data - model)/sigma) values is analyzed, and a correction factor to the standard deviation sigma_corr is returned. This should be multiplied into the errors. In this way, a rejection threshold of e.g. 3-sigma will always correspond to roughly the same percentile. This renormalization guarantees that rejection is not too aggressive in cases where the empirical errors determined from the chi-distribution differ significantly from the noise model which was used to determine chi.
- Parameters
chi (numpy.ndarray) – input chi values
mask (numpy.ndarray) – True = good, mask for your chi array of type bool
clip (float, optional) – threshold for outliers which will be clipped for the purpose of computing the renormalization factor
max_corr (float, optional) – maximum corrected sigma allowed.
title (str, optional) – title for QA plot, passed to renormalize_errors_qa
debug (bool, optional) – If True, show the QA plot created by renormalize_errors_qa
- Returns
(1) sigma_corr (float), corrected new sigma; (2) maskchi (numpy.ndarray, bool): new mask (True=good) which indicates the values used to compute the correction (i.e it includes clipping)
- Return type
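A minimal sketch of the renormalization, assuming sigma_corr is simply the standard deviation of the clipped chi distribution (1.0 for a perfect noise model); the real routine is more careful:

```python
import numpy as np

def renormalize_errors_sketch(chi, gpm, clip=6.0, max_corr=5.0):
    # Keep good pixels whose |chi| is below the clip threshold.
    maskchi = gpm & (np.abs(chi) < clip)
    # Empirical width of the chi distribution, capped at max_corr.
    sigma_corr = min(np.std(chi[maskchi]), max_corr)
    # Multiplying sigma_corr into the errors makes chi unit-variance again.
    return sigma_corr, maskchi
```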
- pypeit.core.coadd.renormalize_errors_qa(chi, maskchi, sigma_corr, sig_range=6.0, title: str = '', qafile: Optional[str] = None)[source]
Generate a histogram QA plot of the input chi distribution.
- Parameters
chi (numpy.ndarray) – your chi values
maskchi (numpy.ndarray) – True = good, mask for your chi array of type bool
sigma_corr (float) – corrected sigma
sig_range (float) – used to set binsize, default +- 6-sigma
title (str, optional) – plot title
qafile (str, optional) – Write figure to this output QA file, if provided
- pypeit.core.coadd.robust_median_ratio(flux, ivar, flux_ref, ivar_ref, mask=None, mask_ref=None, ref_percentile=70.0, min_good=0.05, maxiters=5, sigrej=3.0, max_factor=10.0, snr_do_not_rescale=1.0, verbose=False)[source]
Robustly determine the ratio between input spectrum flux and reference spectrum flux_ref. The code will perform best if the reference spectrum is chosen to be the higher S/N ratio spectrum, i.e. a preliminary stack that you want to scale each exposure to match. Note that the flux and flux_ref need to be on the same wavelength grid!!
- Parameters
flux (numpy.ndarray) – spectrum that will be rescaled. shape=(nspec,)
ivar (numpy.ndarray) – inverse variance for the spectrum that will be rescaled. Same shape as flux
flux_ref (numpy.ndarray) – reference spectrum. Same shape as flux
mask (numpy.ndarray, optional) – boolean mask for the spectrum that will be rescaled. True=Good. If not input, computed from inverse variance
ivar_ref (numpy.ndarray) – inverse variance of reference spectrum.
mask_ref (numpy.ndarray, optional) – Boolean mask for reference spectrum. True=Good. If not input, computed from inverse variance.
ref_percentile (float, optional, default=70.0) – Percentile fraction used for selecting the minimum SNR cut from the reference spectrum. Pixels above this percentile cut are deemed the “good” pixels and are used to compute the ratio. This must be a number between 0 and 100.
min_good (float, optional, default = 0.05) – Minimum fraction of good pixels determined as a fraction of the total pixels for estimating the median ratio
maxiters (int, optional, default = 5) – Maximum number of iterations for astropy.stats.SigmaClip
sigrej (float, optional, default = 3.0) – Rejection threshold for astropy.stats.SigmaClip
max_factor (float, optional, default = 10.0,) – Maximum allowed value of the returned ratio
snr_do_not_rescale (float, optional, default = 1.0) – If the S/N ratio of the set of pixels (defined by the upper ref_percentile in the reference spectrum) in the input spectrum has a median value below snr_do_not_rescale, median rescaling will not be attempted and the code returns ratio = 1.0. We also use this parameter to define the set of pixels (determined from the reference spectrum) to compare for the rescaling.
- Returns
ratio – the number that must be multiplied into flux in order to get it to match up with flux_ref
- Return type
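The core of the calculation can be sketched as follows (a hypothetical simplification that keeps the ref_percentile selection and max_factor cap but omits the sigma clipping and the snr_do_not_rescale guard):

```python
import numpy as np

def median_ratio_sketch(flux, flux_ref, gpm, ref_percentile=70.0, max_factor=10.0):
    # Select the "good" pixels: reference flux above its percentile cut.
    cut = np.percentile(flux_ref[gpm], ref_percentile)
    use = gpm & (flux_ref > cut) & (flux > 0)
    # Median of flux_ref/flux = factor to multiply into flux.
    ratio = np.median(flux_ref[use] / flux[use])
    return float(np.clip(ratio, 1.0 / max_factor, max_factor))
```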
- pypeit.core.coadd.scale_spec(wave, flux, ivar, sn, wave_ref, flux_ref, ivar_ref, mask=None, mask_ref=None, scale_method='auto', min_good=0.05, ref_percentile=70.0, maxiters=5, sigrej=3, max_median_factor=10.0, npoly=None, hand_scale=None, sn_min_polyscale=2.0, sn_min_medscale=0.5, debug=False, show=False)[source]
Routine for solving for the best way to rescale an input spectrum flux to match a reference spectrum flux_ref. The code will work best if you choose the reference to be the highest S/N ratio spectrum. If the scale_method is not specified, the code will make a decision about which method to use based on the input S/N ratio.
- Parameters
wave (numpy.ndarray) – wavelength grid for the spectra of shape (nspec,)
flux (numpy.ndarray) – spectrum that will be rescaled.
ivar (numpy.ndarray) – inverse variance for the spectrum that will be rescaled.
sn (float) – S/N of the spectrum that is being scaled used to make decisions about the scaling method. This can be computed by sn_weights and passed in.
flux_ref (numpy.ndarray, (nspec,)) – reference spectrum.
ivar_ref (numpy.ndarray, (nspec,)) – inverse variance of reference spectrum.
mask (numpy.ndarray) – Boolean mask for the spectrum that will be rescaled. True=Good. If not input, computed from inverse variance
mask_ref (numpy.ndarray) – Boolean mask for reference spectrum. True=Good. If not input, computed from inverse variance.
min_good (float, optional, default = 0.05) – minimum fraction of the total number of good pixels needed to estimate the median ratio
maxiters (int, optional) – maximum number of iterations for rejecting outliers used by the robust_median_ratio routine if median rescaling is the method used.
max_median_factor (float, optional, default=10.0) – maximum scale factor for median rescaling for robust_median_ratio if median rescaling is the method used.
sigrej (float, optional, default=3.0) – rejection threshold used for rejecting outliers by robust_median_ratio
ref_percentile (float, optional, default=70.0) – percentile fraction cut used for selecting minimum SNR cut for robust_median_ratio
npoly (int, optional, default=None) – order for the poly ratio scaling if polynomial rescaling is the method used. Default is to automatically compute this based on S/N ratio of data.
scale_method (str, optional) – scale method, str, default=’auto’. Options are auto, poly, median, none, or hand. Hand is not well tested. The user can optionally specify the rescaling method. Default is to let the code determine this automatically, which works well.
hand_scale (numpy.ndarray, optional) – array of hand scale factors, not well tested. shape=(nexp,)
sn_min_polyscale (float, optional, default=2.0) – maximum SNR for performing median scaling
sn_min_medscale (float, optional, default=0.5) – minimum SNR for performing median scaling
debug (bool, optional, default=False) – show interactive QA plot
- Returns
Multiple items –
(1) flux_scale: ndarray (nspec,) scaled spectrum; (2) ivar_scale: ndarray (nspec,) inverse variance for scaled spectrum; (3) scale: numpy.ndarray (nspec,) scale factor applied to the spectrum and inverse variance; (4) scale_method: str, method that was used to scale the spectra.
- Return type
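The ‘auto’ decision implied by the two S/N thresholds can be sketched as below (a hypothetical reading of the documented parameters, not the verbatim pypeit logic):

```python
def choose_scale_method(sn, sn_min_polyscale=2.0, sn_min_medscale=0.5):
    # Polynomial rescaling for high S/N, median rescaling at moderate S/N,
    # and no rescaling when the spectrum is too noisy to fit a ratio.
    if sn > sn_min_polyscale:
        return 'poly'
    if sn > sn_min_medscale:
        return 'median'
    return 'none'
```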
- pypeit.core.coadd.scale_spec_qa(wave, flux, ivar, wave_ref, flux_ref, ivar_ref, ymult, scale_method, mask=None, mask_ref=None, ylim=None, title='')[source]
QA plot for spectrum scaling.
- Parameters
wave (numpy.ndarray) – wavelength array for spectrum to be scaled and reference spectrum. shape=(nspec,)
flux (numpy.ndarray) – flux for spectrum to be scaled; shape=(nspec,)
ivar (numpy.ndarray) – inverse variance for spectrum to be scaled. shape=(nspec,)
wave_ref (numpy.ndarray) – reference wavelengths; shape=(nspec,)
flux_ref (numpy.ndarray) – reference flux; shape=(nspec,)
ivar_ref (numpy.ndarray) – inverse variance of reference flux; shape=(nspec,)
ymult (numpy.ndarray) – scale factor array; shape=(nspec,)
scale_method (str) – label of method used for rescaling which will be shown on QA plot.
mask (numpy.ndarray, optional) – Boolean mask for spectrum to be scaled. True=Good. If not specified, determined from inverse variance. shape=(nspec,)
mask_ref (numpy.ndarray, optional) – Boolean mask for reference flux. True=Good. shape=(nspec,)
ylim (tuple, optional) – tuple for limits of the QA plot. If None, will be determined automatically with get_ylim
title (str, optional) – QA plot title
- pypeit.core.coadd.scale_spec_stack(wave_grid, waves, fluxes, ivars, masks, sn, weights, ref_percentile=70.0, maxiter_scale=5, sigrej_scale=3.0, scale_method='auto', hand_scale=None, sn_min_polyscale=2.0, sn_min_medscale=0.5, debug=False, show=False)[source]
Scale a stack of input spectra onto a common flux scale. Each exposure is rescaled to match a preliminary stack of the spectra using scale_spec.
- Parameters
wave_grid (numpy.ndarray) – New wavelength grid desired. This will typically be a regularly spaced grid created by the get_wave_grid routine. The reason for the ngrid+1 is that this is the general way to specify a set of bins if you desire ngrid bin centers, i.e. the output stacked spectra have ngrid elements. The spacing of this grid can be regular in lambda (better for multislit) or log lambda (better for echelle). This new wavelength grid should be designed with the sampling of the data in mind. For example, the code will work fine if you choose the sampling to be too fine, but then the number of exposures contributing to any given wavelength bin will be one or zero in the limiting case of very small wavelength bins. For larger wavelength bins, the number of exposures contributing to a given bin will be larger. shape=(ngrid +1,)
waves (numpy.ndarray) – wavelength arrays for spectra to be stacked. Note that the wavelength grids can in general be different for each exposure and irregularly spaced. shape=(nspec, nexp)
fluxes (numpy.ndarray) – fluxes for each exposure on the waves grid shape=(nspec, nexp)
ivars (numpy.ndarray) – Inverse variances for each exposure on the waves grid shape=(nspec, nexp)
masks (numpy.ndarray) – Bool masks for each exposure on the waves grid. True=Good. shape=(nspec, nexp)
sn (numpy.ndarray) – sn of each spectrum in the stack used to determine which scaling method should be used. This can be computed using sn_weights. shape=(nexp,)
sigrej_scale (float, optional, default=3.0) – Rejection threshold used for rejecting pixels when rescaling spectra with scale_spec.
ref_percentile (float, optional, default=70.0) – percentile fraction cut used for selecting minimum SNR cut for robust_median_ratio
maxiter_scale (int, optional, default=5) – Maximum number of iterations performed for rescaling spectra.
scale_method (str, optional, default='auto') – Options are auto, poly, median, none, or hand. Hand is not well tested. User can optionally specify the rescaling method. Default is to let the code determine this automitically which works well.
hand_scale (numpy.ndarray, optional) – array of hand scale factors, not well tested
sn_min_polyscale (float, optional, default=2.0) – maximum SNR for performing median scaling
sn_min_medscale (float, optional, default=0.5) – minimum SNR for performing median scaling
debug (bool, optional, default=False) – show interactive QA plot
- Returns
fluxes_scale (numpy.ndarray) – Rescaled fluxes; shape=(nspec, nexp)
ivars_scale (numpy.ndarray) – Rescaled ivars; shape=(nspec, nexp)
scales (numpy.ndarray) – Scale factors applied to each individual spectrum before the combine, computed by scale_spec; shape=(nspec, nexp)
scale_method_used (list) – List of methods used for rescaling spectra.
- pypeit.core.coadd.smooth_weights(inarr, gdmsk, sn_smooth_npix)[source]
Smooth the input weights with a Gaussian 1D kernel.
- Parameters
inarr (numpy.ndarray) – S/N spectrum to be smoothed. shape = (nspec,)
gdmsk (numpy.ndarray) – Boolean mask of good pixels. shape = (nspec,)
sn_smooth_npix (float) – Number of pixels used for determining smoothly varying S/N ratio weights. The sigma of the kernel is set by sig_res = max(sn_smooth_npix / 10.0, 3.0)
- Returns
smoothed version of inarr.
- Return type
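A sketch of the smoothing, with scipy's gaussian_filter1d standing in for whatever kernel convolution the real code uses; the documented sigma rule is applied and bad pixels are bridged by linear interpolation (an assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_weights_sketch(inarr, gdmsk, sn_smooth_npix):
    # Documented rule for the kernel width.
    sig_res = max(sn_smooth_npix / 10.0, 3.0)
    x = np.arange(inarr.size)
    # Bridge masked gaps so bad pixels don't bias the smoothed S/N.
    filled = np.interp(x, x[gdmsk], inarr[gdmsk])
    return gaussian_filter1d(filled, sig_res)
```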
- pypeit.core.coadd.sn_weights(waves, fluxes, ivars, masks, sn_smooth_npix, const_weights=False, ivar_weights=False, relative_weights=False, verbose=False)[source]
Calculate the S/N of each input spectrum and create an array of (S/N)^2 weights to be used for coadding.
- Parameters
waves (numpy.ndarray) – Reference wavelength grid for all the spectra. If wave is a 1d array, the routine will assume that all spectra are on the same wavelength grid. If wave is a 2-d array, it will use the individual wavelength grid of each exposure. shape = (nspec,) or (nspec, nexp)
fluxes (numpy.ndarray) – Stack of (nspec, nexp) spectra where nexp = number of exposures, and nspec is the length of the spectrum.
ivars (numpy.ndarray) – Inverse variance noise vectors for the spectra; shape = (nspec, nexp)
masks (numpy.ndarray) – Mask for stack of spectra. True=Good, False=Bad; shape = (nspec, nexp)
sn_smooth_npix (float) – Number of pixels used for determining smoothly varying S/N ratio weights.
const_weights (bool, optional) – Use constant weights for each spectrum?
ivar_weights (bool, optional) – Use inverse variance weighted scheme?
relative_weights (bool, optional) – Calculate weights by fitting to the ratio of spectra? Note, relative weighting will only work well when there is at least one spectrum with a reasonable S/N, and a continuum. RJC note - This argument may only be better when the object being used has a strong continuum + emission lines. The reference spectrum is assigned a value of 1 for all wavelengths, and the weights of all other spectra will be determined relative to the reference spectrum. This is particularly useful if you are dealing with highly variable spectra (e.g. emission lines) and require a precision better than ~1 per cent.
verbose (bool, optional) – Verbosity of print out.
- Returns
rms_sn (numpy.ndarray) – Root mean square S/N value for each input spectra; shape (nexp,)
weights (numpy.ndarray) – Weights to be applied to the spectra. These are signal-to-noise squared weights. shape = (nspec, nexp)
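The essence of the calculation, sketched with constant per-exposure weights (the const_weights case); the real code additionally smooths the per-pixel S/N to produce wavelength-dependent weights:

```python
import numpy as np

def sn_weights_sketch(fluxes, ivars, masks):
    nspec, nexp = fluxes.shape
    sn_val = fluxes * np.sqrt(ivars)          # per-pixel S/N
    # rms S/N of the good pixels in each exposure.
    rms_sn = np.array([np.sqrt(np.mean(sn_val[masks[:, i], i]**2))
                       for i in range(nexp)])
    # (S/N)^2 weights, broadcast to shape (nspec, nexp).
    weights = np.tile(rms_sn**2, (nspec, 1))
    return rms_sn, weights
```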
- pypeit.core.coadd.solve_poly_ratio(wave, flux, ivar, flux_ref, ivar_ref, norder, mask=None, mask_ref=None, scale_min=0.05, scale_max=100.0, func='legendre', model='square', maxiter=3, sticky=True, lower=3.0, upper=3.0, median_frac=0.01, ref_percentile=70.0, debug=False)[source]
Routine for solving for the polynomial rescaling of an input spectrum flux to match a reference spectrum flux_ref. The two spectra need to be defined on the same wavelength grid. The code will work best if you choose the reference to be the higher S/N ratio spectrum. Note that the code multiplies in the square of a polynomial of order norder to ensure positivity of the scale factor. It also operates on median filtered spectra to be more robust against outliers.
- Parameters
wave (numpy.ndarray) – wavelength array of shape (nspec,). flux, ivar, flux_ref, and ivar_ref must all be on the same wavelength grid
flux (numpy.ndarray) – flux that you want to rescale to match flux_ref
ivar (numpy.ndarray) – inverse variance of the array that you want to rescale to match flux_ref
flux_ref (numpy.ndarray) – reference flux that you want to rescale flux to match.
ivar_ref (numpy.ndarray) – inverse variance for reference flux
norder (int) – Order of polynomial rescaling; norder=1 is a linear fit and norder must be >= 1 otherwise the code will fault.
mask (numpy.ndarray, optional) – boolean mask for spectrum that you want to rescale, True=Good
mask_ref (numpy.ndarray, optional) – boolean mask for reference flux
scale_min (float, optional) – minimum scaling factor allowed. default =0.05
scale_max (float, optional) – maximum scaling factor allowed. default=100.0
func (str, optional) – function you want to use. default=’legendre’
model (str, optional) – model type, valid model types are ‘poly’, ‘square’, or ‘exp’, corresponding to normal polynomial, squared polynomial, or exponentiated polynomial. default = ‘square’
maxiter (int, optional) – maximum number of iterations for robust_optimize. default=3
sticky (bool, optional) – whether you want the rejection to be sticky or not with robust_optimize. See docs for djs_reject for definition of sticky. default=True
lower (float, optional) – lower sigrej rejection threshold for robust_optimize. default=3.0
upper (float, optional) – upper sigrej rejection threshold for robust_optimize. default=3.0
median_frac (float, optional) – the code rescales median filtered spectra with ‘reflect’ boundary conditions. The width of the median filter will be median_frac*nspec, where nspec is the number of spectral pixels. default=0.01
debug (bool, optional) – If True, show interactive QA plot. default=False
- Returns
ymult (numpy.ndarray, (nspec,)) – rescaling factor to be multiplied into flux to match flux_ref.
fit_tuple (tuple) – Tuple with the polynomial coefficients, the minimum wavelength coordinate, and the maximum wavelength coordinate used in the fit.
flux_rescale (numpy.ndarray, (nspec,)) – rescaled flux, i.e. ymult multiplied into flux.
ivar_rescale (numpy.ndarray, (nspec,)) – rescaled inverse variance
outmask (numpy.ndarray, bool, (nspec,)) – output mask determined from the robust_optimize optimization/rejection iterations. True=Good
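The ‘square’ model fit can be sketched as below: ymult = P(x)**2 for a Legendre series P, so the scale factor is positive by construction. A plain least-squares objective stands in for the robust, iterative machinery of the real code, and the function name is hypothetical:

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import minimize

def solve_poly_ratio_sketch(wave, flux, flux_ref, norder):
    # Rescale wavelengths to [-1, 1] for the polynomial basis.
    x = 2.0 * (wave - wave.min()) / (wave.max() - wave.min()) - 1.0

    def chi2(theta):
        ymult = legendre.legval(x, theta)**2    # squared => positive scale
        return np.sum((ymult * flux - flux_ref)**2)

    theta0 = np.zeros(norder + 1)
    theta0[0] = 1.0                             # start from ymult = 1
    res = minimize(chi2, theta0)
    ymult = legendre.legval(x, res.x)**2
    return ymult, ymult * flux
```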
- pypeit.core.coadd.spec_reject_comb(wave_grid, waves, fluxes, ivars, masks, weights, sn_clip=30.0, lower=3.0, upper=3.0, maxrej=None, maxiter_reject=5, title='', debug=False, verbose=False)[source]
Routine for executing the iterative combine and rejection of a set of spectra to compute a final stacked spectrum.
- Parameters
wave_grid (numpy.ndarray) – new wavelength grid desired. This will typically be a regularly spaced grid created by the get_wave_grid routine. The reason for the ngrid+1 is that this is the general way to specify a set of bins if you desire ngrid bin centers, i.e. the output stacked spectra have ngrid elements. The spacing of this grid can be regular in lambda (better for multislit) or log lambda (better for echelle). This new wavelength grid should be designed with the sampling of the data in mind. For example, the code will work fine if you choose the sampling to be too fine, but then the number of exposures contributing to any given wavelength bin will be one or zero in the limiting case of very small wavelength bins. For larger wavelength bins, the number of exposures contributing to a given bin will be larger. shape=(ngrid +1,)
waves (numpy.ndarray) – wavelength arrays for spectra to be stacked. Note that the wavelength grids can in general be different for each exposure and irregularly spaced. shape=(nspec, nexp)
fluxes (numpy.ndarray) – fluxes for each exposure on the waves grid
ivars (numpy.ndarray) – Inverse variances for each exposure on the waves grid
masks (numpy.ndarray) – Boolean masks for each exposure on the waves grid. True=Good.
weights (numpy.ndarray) – Weights to be used for combining your spectra. These are computed using sn_weights
sn_clip (float, optional, default=30.0) – Errors are capped during rejection so that the S/N is never greater than sn_clip. This prevents overly aggressive rejection in high S/N ratio spectra which nevertheless differ at a level greater than the implied S/N due to systematics.
lower (float, optional, default=3.0) – lower rejection threshold for djs_reject
upper (float, optional, default=3.0) – upper rejection threshold for djs_reject
maxrej (int, optional, default=None) – maximum number of pixels to reject in each iteration for djs_reject.
maxiter_reject (int, optional, default=5) – maximum number of iterations for stacking and rejection. The code stops iterating either when the output mask does not change between successive iterations or when maxiter_reject is reached.
title (str, optional) – Title for QA plot
debug (bool, optional, default=False) – Show QA plots useful for debugging.
verbose (bool, optional, default=False) – If True, print additional diagnostic output to screen.
- Returns
wave_stack (numpy.ndarray) – Wavelength grid for stacked spectrum. As discussed above, this is the weighted average of the wavelengths of each spectrum that contributed to a bin in the input wave_grid wavelength grid. It thus has ngrid elements, whereas wave_grid has ngrid+1 elements to specify the ngrid total number of bins. Note that wave_stack is NOT simply the wave_grid bin centers, since it computes the weighted average. shape=(ngrid,)
flux_stack (numpy.ndarray) – Final stacked spectrum on wave_stack wavelength grid shape=(ngrid,)
ivar_stack (numpy.ndarray) – Inverse variance spectrum on wave_stack wavelength grid. Errors are propagated according to weighting and masking. shape=(ngrid,)
mask_stack (numpy.ndarray) – Boolean mask for stacked spectrum on wave_stack wavelength grid. True=Good. shape=(ngrid,)
outmask (numpy.ndarray) – Output bool mask with shape=(nspec, nexp) indicating which pixels are rejected in each exposure of the original input spectra after performing all of the iterations of combine/rejection
nused (numpy.ndarray) – Number of exposures which contributed to each pixel in the wave_stack. Note that this is in general different from nexp because of masking, but also because of the sampling specified by wave_grid. In other words, sometimes more spectral pixels in the irregularly gridded input wavelength array waves will land in one bin versus another depending on the sampling. shape=(ngrid,)
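The combine/reject iteration described above can be sketched in simplified form. This toy version assumes all exposures already share one wavelength grid (spec_reject_comb additionally rebins irregular grids onto wave_grid), and the function name and loop structure are illustrative assumptions, not pypeit's code:

```python
import numpy as np

def simple_reject_comb(fluxes, ivars, masks, lower=3.0, upper=3.0, maxiter=5):
    """Toy combine/reject iteration on a shared grid. Repeats:
    (1) inverse-variance weighted stack over unmasked pixels,
    (2) reject pixels whose residual chi falls outside (-lower, upper),
    until the mask stops changing or maxiter is reached."""
    outmask = masks.copy()
    for _ in range(maxiter):
        w = ivars * outmask                      # weights, zero where masked
        wsum = np.sum(w, axis=1)
        flux_stack = np.sum(w * fluxes, axis=1) / np.where(wsum > 0, wsum, 1.0)
        ivar_stack = wsum
        # chi of each exposure about the current stack
        chi = (fluxes - flux_stack[:, None]) * np.sqrt(ivars)
        newmask = outmask & (chi > -lower) & (chi < upper)
        if np.array_equal(newmask, outmask):     # converged: mask unchanged
            break
        outmask = newmask
    nused = np.sum(outmask, axis=1)
    return flux_stack, ivar_stack, outmask, nused

# Five exposures of a flat spectrum, with one deviant pixel in exposure 1
fluxes = np.ones((5, 5))
fluxes[2, 1] = 10.0
ivars = np.ones_like(fluxes)
masks = np.ones_like(fluxes, dtype=bool)
fs, iv, om, nu = simple_reject_comb(fluxes, ivars, masks)
```

After one rejection pass the deviant pixel is masked and the stack at that wavelength is recomputed from the remaining four exposures.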
- pypeit.core.coadd.update_errors(fluxes, ivars, masks, fluxes_stack, ivars_stack, masks_stack, sn_clip=30.0, title='', debug=False)[source]
Determine corrections to errors using the residuals of each exposure about a preliminary stack. This routine is used as part of the iterative masking/stacking loop to determine the corrections to the errors used to reject pixels for the next iteration of the stack. The routine returns a set of corrections for each of the exposures that is input.
- Parameters
fluxes (numpy.ndarray) – fluxes for each exposure on the native wavelength grids shape=(nspec, nexp)
ivars (numpy.ndarray) – Inverse variances for each exposure on the native wavelength grids shape=(nspec, nexp)
masks (numpy.ndarray) – Boolean masks for each exposure on the native wavelength grids. True=Good. shape=(nspec, nexp)
fluxes_stack (numpy.ndarray) – Stacked spectrum for this iteration interpolated on the native wavelength grid of the fluxes exposures. shape=(nspec, nexp)
ivars_stack (numpy.ndarray) – Inverse variances of stacked spectrum for this iteration interpolated on the native wavelength grid of the fluxes exposures. shape=(nspec, nexp)
masks_stack (numpy.ndarray) – Boolean mask of stacked spectrum for this iteration interpolated on the native wavelength grid of the fluxes exposures. shape=(nspec, nexp)
sn_clip (float, optional) – Errors are capped in output rejivars so that the S/N is never greater than sn_clip. This prevents overly aggressive rejection in high S/N ratio spectra which nevertheless differ at a level greater than the implied S/N due to systematics.
title (str, optional) – Title for QA plot
debug (bool, optional) – If True, show QA plots useful for debugging.
- Returns
rejivars (numpy.ndarray) – Updated inverse variances to be used in rejection. shape=(nspec, nexp)
sigma_corrs (numpy.ndarray) – Array of correction factors applied to the original ivars to obtain the new rejivars. shape=(nexp,)
outchi (numpy.ndarray) – The original chi=(fluxes-fluxes_stack)*np.sqrt(ivars) used to determine the correction factors. This quantity is useful for plotting. Note that outchi is computed using the original, uncorrected errors. shape=(nspec, nexp)
maskchi (numpy.ndarray) – Boolean mask returned by renormalize_errors indicating which pixels were used in the computation of the correction factors. This is essentially the union of the input masks, but with chi > clip (clip=6.0 is the default) values clipped out. shape=(nspec, nexp)
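The error-correction step can be sketched as follows. This is an illustrative simplification (the function name, the use of the chi standard deviation as the correction factor, and the S/N-capping formula are assumptions of this sketch, not pypeit's exact algorithm):

```python
import numpy as np

def simple_update_errors(fluxes, ivars, masks, fluxes_stack, sn_clip=30.0):
    """Sketch of an error-correction step: compute chi about the stack,
    estimate a per-exposure correction factor from the chi scatter, and
    return inflated 'rejection' inverse variances with the implied S/N
    capped at sn_clip so high-S/N systematics are not over-rejected."""
    nspec, nexp = fluxes.shape
    # chi computed with the original, uncorrected errors
    outchi = (fluxes - fluxes_stack) * np.sqrt(ivars)
    sigma_corrs = np.ones(nexp)
    rejivars = np.zeros_like(ivars)
    for iexp in range(nexp):
        chi = outchi[masks[:, iexp], iexp]
        if chi.size > 1:
            # Inflate errors when the chi distribution is wider than unit normal
            sigma_corrs[iexp] = max(1.0, np.std(chi))
        var = sigma_corrs[iexp]**2 / np.maximum(ivars[:, iexp], 1e-30)
        # Cap the S/N: never let flux * sqrt(rejivar) exceed sn_clip
        var_clip = np.maximum(var, (fluxes[:, iexp] / sn_clip)**2)
        rejivars[:, iexp] = 1.0 / var_clip
    return rejivars, sigma_corrs, outchi

# Two exposures that agree perfectly with the stack; one is high S/N
fluxes = np.array([[1.0, 100.0], [1.0, 100.0], [1.0, 100.0]])
ivars = np.ones_like(fluxes)
masks = np.ones_like(fluxes, dtype=bool)
rejivars, sigma_corrs, outchi = simple_update_errors(fluxes, ivars, masks, fluxes.copy())
```

With zero residuals the correction factors stay at unity, but the high-S/N exposure still has its rejection ivar reduced by the sn_clip cap.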
- pypeit.core.coadd.weights_qa(waves, weights, masks, title='')[source]
Routine to make a QA plot for the weights used to compute a stacked spectrum.
- Parameters
waves (numpy.ndarray) – wavelength arrays for the spectra that went into a stack; shape=(nspec, nexp)
weights (numpy.ndarray) – (S/N)^2 weights for the exposures that went into a stack. This would have been computed by sn_weights shape=(nspec, nexp)
masks (numpy.ndarray) – Boolean array indicating which pixels were masked in each individual exposure that went into the stack. shape=(nspec, nexp)
title (str, optional) – Title for the plot.
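The weights inspected by this QA routine are (S/N)^2 weights of the kind produced by sn_weights. As a rough illustration only, here is a toy construction of such weights from a per-exposure median S/N (pypeit's sn_weights can also produce wavelength-dependent weights by smoothing the S/N spectrum, which this sketch does not do):

```python
import numpy as np

def toy_sn2_weights(fluxes, ivars, masks):
    """Toy (S/N)^2 weights: one scalar weight per exposure, from the
    median S/N over unmasked pixels, broadcast along the spectral axis.
    Illustrative only; not pypeit's sn_weights."""
    nspec, nexp = fluxes.shape
    weights = np.zeros((nspec, nexp))
    for iexp in range(nexp):
        snr = fluxes[:, iexp] * np.sqrt(ivars[:, iexp])
        sn_med = np.median(snr[masks[:, iexp]])
        weights[:, iexp] = sn_med**2   # (S/N)^2 weighting
    return weights

# Exposure 0 has twice the S/N of exposure 1, so it gets 4x the weight
fluxes = np.column_stack([2.0 * np.ones(4), np.ones(4)])
ivars = np.ones_like(fluxes)
masks = np.ones_like(fluxes, dtype=bool)
w = toy_sn2_weights(fluxes, ivars, masks)
```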