pypeit.core.coadd module

Coadding module.

pypeit.core.coadd.calc_snr(fluxes, ivars, gpms)[source]

Calculate the rms S/N of each input spectrum.

Parameters:
  • fluxes (list) – List of length nexp containing 1d float numpy.ndarray spectra. The shapes of the arrays in the list can differ. nexp = len(fluxes)

  • ivars (list) – List of length nexp containing the numpy.ndarray 1d float inverse variances of the spectra.

  • gpms (list) – List of length nexp containing the numpy.ndarray 1d boolean good pixel masks of the spectra.

Returns:

  • rms_sn (numpy.ndarray) – Array of shape (nexp,) giving the root-mean-square S/N value for each input spectrum, where nexp=len(fluxes).

  • sn_val (list) – List of length nexp containing the wavelength dependent S/N arrays for each input spectrum, i.e. each element contains the array flux*sqrt(ivar)
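For illustration, here is a minimal numpy sketch of the computation described above (sn_val = flux*sqrt(ivar), then the rms over good pixels). This is a hypothetical re-implementation with invented toy data, not the pypeit source:

```python
import numpy as np

def rms_snr_sketch(fluxes, ivars, gpms):
    # Wavelength-dependent S/N for each exposure: flux * sqrt(ivar)
    sn_val = [f * np.sqrt(np.maximum(iv, 0.0)) for f, iv in zip(fluxes, ivars)]
    # Root-mean-square S/N over the good pixels of each exposure
    rms_sn = np.array([np.sqrt(np.mean(sn[g]**2)) if np.any(g) else 0.0
                       for sn, g in zip(sn_val, gpms)])
    return rms_sn, sn_val

# Two toy exposures with different lengths, as the docs allow
fluxes = [np.full(5, 2.0), np.full(3, 1.0)]
ivars = [np.full(5, 1.0), np.full(3, 4.0)]
gpms = [np.ones(5, dtype=bool), np.ones(3, dtype=bool)]
rms_sn, sn_val = rms_snr_sketch(fluxes, ivars, gpms)
# rms_sn -> array([2., 2.])
```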

pypeit.core.coadd.coadd_iexp_qa(wave, flux, rejivar, mask, wave_stack, flux_stack, ivar_stack, mask_stack, outmask, norder=None, title='', qafile=None, show_telluric=False)[source]

Routine to create QA showing the individual spectrum compared to the combined stacked spectrum, indicating which pixels were rejected.

Parameters:
  • wave (numpy.ndarray) – Wavelength array for spectrum of the exposure in question. Shape is (nspec,).

  • flux (numpy.ndarray) – Flux for the exposure in question. Shape is (nspec,).

  • rejivar (numpy.ndarray) – Inverse variance for the exposure in question. Shape is (nspec,).

  • mask (numpy.ndarray) – Boolean mask for the exposure in question, True=Good. If not specified, it is determined from the inverse variance. Shape is (nspec,).

  • flux_stack (numpy.ndarray) – Stacked spectrum to be compared to the exposure in question. Shape is (nspec,).

  • ivar_stack (numpy.ndarray) – Inverse variance of the stacked spectrum. Shape is (nspec,).

  • mask_stack (numpy.ndarray) – Boolean array with mask for stacked spectrum. Shape is (nspec,).

  • norder (int, optional) – Indicate the number of orders if this is an echelle stack. If None, …

  • title (str, optional) – Plot title

  • qafile (str, optional) – QA file name

  • show_telluric (bool, optional) – Show the atmospheric absorption if wavelengths > 9000A are covered by the spectrum

pypeit.core.coadd.coadd_qa(wave, flux, ivar, nused, gpm=None, tell=None, title=None, qafile=None, show_telluric=False)[source]

Routine to make a QA plot of the final stacked spectrum. It works for longslit/multislit data, for coadds of individual Echelle order spectra, and for the final Echelle coadd.

Parameters:
  • wave (numpy.ndarray, shape=(nspec,)) – one-d wavelength array of your spectrum

  • flux (numpy.ndarray, shape=(nspec,)) – one-d flux array of your spectrum

  • ivar (numpy.ndarray, shape=(nspec,)) – one-d ivar array of your spectrum

  • nused (numpy.ndarray, shape=(nspec,)) – how many exposures were used in the stack for each pixel; same size as flux.

  • gpm (numpy.ndarray, optional, shape=(nspec,)) – boolean good pixel mask array for your spectrum. Good=True.

  • tell (numpy.ndarray, optional, shape=(nspec,)) – one-d telluric array for your spectrum

  • title (str, optional) – plot title

  • qafile (str, optional) – QA file name

  • show_telluric (bool, optional) – Show a telluric absorption model on top of the data if wavelengths > 9000A are covered by the spectrum

pypeit.core.coadd.combspec(waves, fluxes, ivars, gpms, sn_smooth_npix, wave_method='linear', dwave=None, dv=None, dloglam=None, spec_samp_fact=1.0, wave_grid_min=None, wave_grid_max=None, ref_percentile=70.0, maxiter_scale=5, wave_grid_input=None, sigrej_scale=3.0, scale_method='auto', hand_scale=None, sn_min_polyscale=2.0, sn_min_medscale=0.5, weight_method='auto', maxiter_reject=5, sn_clip=30.0, lower=3.0, upper=3.0, maxrej=None, qafile=None, title='', debug=False, debug_scale=False, show_scale=False, show=False, verbose=True)[source]

Main driver routine for coadding longslit/multi-slit spectra. NEED LOTS MORE HERE

Parameters:
  • waves (list) – List of length nexp containing numpy.ndarray float wavelength arrays for spectra to be stacked. These wavelength arrays can in general be different, irregularly spaced, and have different sizes.

  • fluxes (list) – List of length nexp containing numpy.ndarray float flux arrays for spectra to be stacked. These should be aligned with the wavelength arrays in waves.

  • ivars (list) – List of length nexp containing numpy.ndarray float ivar arrays for spectra to be stacked. These should be aligned with the wavelength arrays in waves.

  • gpms (list) – List of length nexp containing numpy.ndarray boolean good pixel mask arrays for spectra to be stacked. These should be aligned with the wavelength arrays in waves.

  • sn_smooth_npix (int) – Number of pixels to median filter by when computing S/N used to decide how to scale and weight spectra

  • wave_method (str, optional) – method for generating new wavelength grid with get_wave_grid. Default is ‘linear’, which creates a uniformly spaced grid in lambda. See the documentation of get_wave_grid for a description of the options.

  • dwave (float, optional) – dispersion in units of A in case you want to specify it for get_wave_grid, otherwise the code computes the median spacing from the data.

  • dv (float, optional) – Dispersion in units of km/s in case you want to specify it in the get_wave_grid (for the ‘velocity’ option), otherwise a median value is computed from the data.

  • spec_samp_fact (float, optional) – Make the wavelength grid sampling finer (spec_samp_fact < 1.0) or coarser (spec_samp_fact > 1.0) by this sampling factor. This basically multiplies the ‘native’ spectral pixel size by spec_samp_fact, i.e. the units of spec_samp_fact are pixels.

  • wave_grid_min (float, optional) – In case you want to specify the minimum wavelength in your wavelength grid, default=None computes from data.

  • wave_grid_max (float, optional) – In case you want to specify the maximum wavelength in your wavelength grid, default=None computes from data.

  • ref_percentile (float, optional) – Percentile threshold used for selecting the minimum S/N cut in robust_median_ratio

  • maxiter_scale (int, optional, default=5) – Maximum number of iterations performed for rescaling spectra.

  • wave_grid_input (numpy.ndarray, optional) – User input wavelength grid to be used with the ‘user_input’ wave_method. Shape is (nspec_input,)

  • maxiter_reject (int, optional, default=5) – maximum number of iterations for stacking and rejection. The code stops iterating either when the output mask does not change between successive iterations or when maxiter_reject is reached.

  • sigrej_scale (float, optional, default=3.0) – Rejection threshold used for rejecting pixels when rescaling spectra with scale_spec.

  • scale_method (str, optional) – Rescaling method. Options are ‘auto’, ‘poly’, ‘median’, ‘none’, or ‘hand’ (‘hand’ is not well tested). The default, ‘auto’, lets the code determine the method automatically, which generally works well.

  • hand_scale (numpy.ndarray, optional) – Array of hand scale factors, not well tested

  • sn_min_polyscale (float, optional, default = 2.0) – minimum S/N required to perform polynomial scaling

  • sn_min_medscale (float, optional, default = 0.5) – minimum S/N required to perform median scaling

  • weight_method (str, optional) – Weight method to use for coadding spectra (see sn_weights() for documentation). Default=’auto’

  • sn_clip (float, optional, default=30.0) – Errors are capped during rejection so that the S/N is never greater than sn_clip. This prevents overly aggressive rejection in high S/N spectra which nevertheless differ at a level greater than the implied S/N due to systematics.

  • lower (float, optional, default=3.0) – lower rejection threshold for djs_reject

  • upper (float, optional, default=3.0) – upper rejection threshold for djs_reject

  • maxrej (int, optional) – maximum number of pixels to reject in each iteration for djs_reject.

  • qafile (str, default=None) – Root name for QA, if None, it will be determined from the outfile

  • title (str, optional) – Title for QA plots

  • debug (bool, default=False) – Show all QA plots useful for debugging. Note there are lots of QA plots, so only set this to True if you want to inspect them all.

  • debug_scale (bool, default=False) – show interactive QA plots for the rescaling of the spectra

  • show (bool, default=False) – If True, show key QA plots

  • show_scale (bool, default=False) – If True, show interactive QA plots for the rescaling of the spectra

Returns:

  • wave_grid_mid (numpy.ndarray) – Wavelength grid (in Angstrom) evaluated at the bin centers, uniformly-spaced either in lambda or log10-lambda/velocity. See core.wavecal.wvutils.py for more. shape=(ngrid,)

  • wave_stack (numpy.ndarray) – Wavelength grid for stacked spectrum. As discussed above, this is the weighted average of the wavelengths of each spectrum that contributed to a bin in the input wave_grid wavelength grid. It thus has ngrid elements, whereas wave_grid has ngrid+1 elements to specify the ngrid total number of bins. Note that wave_stack is NOT simply the wave_grid bin centers, since it computes the weighted average. shape=(ngrid,)

  • flux_stack (numpy.ndarray) – Final stacked spectrum on wave_stack wavelength grid shape=(ngrid,)

  • ivar_stack (numpy.ndarray) – Inverse variance spectrum on wave_stack wavelength grid. Errors are propagated according to weighting and masking. shape=(ngrid,)

  • mask_stack (numpy.ndarray) – Boolean mask for stacked spectrum on wave_stack wavelength grid. True=Good. shape=(ngrid,)
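As a hedged sketch of how the default ‘linear’ wavelength grid could be built (the real implementation lives in get_wave_grid; the function name and numbers below are hypothetical):

```python
import numpy as np

def linear_wave_grid(wave_grid_min, wave_grid_max, dwave):
    # ngrid + 1 bin edges define ngrid bins, matching the wave_grid
    # convention used by combspec/compute_stack
    ngrid = int(np.ceil((wave_grid_max - wave_grid_min) / dwave))
    return wave_grid_min + dwave * np.arange(ngrid + 1)

grid = linear_wave_grid(4000.0, 4010.0, 1.0)
# 11 bin edges -> 10 wavelength bins
```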

pypeit.core.coadd.compute_coadd2d(ref_trace_stack, sciimg_stack, sciivar_stack, skymodel_stack, inmask_stack, thismask_stack, waveimg_stack, wave_grid, spat_samp_fact=1.0, maskdef_dict=None, weights=None, interp_dspat=True)[source]

Construct a 2d co-add of a stack of PypeIt spec2d reduction outputs.

Slits are ‘rectified’ onto a spatial and spectral grid, which encompasses the spectral and spatial coverage of the image stacks. The rectification uses nearest grid point interpolation to avoid covariant errors. Dithering is supported as all images are centered relative to a set of reference traces in trace_stack.

Todo

These docs appear out-of-date

Parameters:
  • ref_trace_stack (list) –

    List of reference traces about which the images are rectified and coadded. If the images were not dithered then this reference trace can simply be the center of the slit:

    slitcen = (slit_left + slit_righ)/2
    

    If the images were dithered, then this object can either be the slitcen appropriately shifted with the dither pattern, or it could be the trace of the object of interest in each exposure determined by running PypeIt on the individual images. Shape is (nspec, nimgs).

  • sciimg_stack (list) – List of science images, each of which is a float numpy.ndarray. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.

  • sciivar_stack (list) – List of inverse variance images, each of which is a float numpy.ndarray. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.

  • skymodel_stack (list) – List of the model sky images, each of which is a float numpy.ndarray. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.

  • inmask_stack (list) – List of input good pixel masks (i.e. True values are good, False values are bad.), each of which is a boolean numpy.ndarray. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.

  • waveimg_stack (list) – List of the wavelength images, each of which is a float numpy.ndarray. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.

  • thismask_stack (list) – List of boolean arrays containing the masks indicating which pixels are on the slit in question. True values are on the slit; False values are off the slit. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.

  • weights (list, optional) – The weights used when combining the rectified images (see weighted_combine()). If weights is None a uniform weighting is used. If weights is not None then it must be a list of length nimgs. The individual elements of the list can either be floats, indicating the weight to be used for each image, or arrays with shape = (nspec,) or shape = (nspec, nspat), indicating pixel weights for the individual images. Weights are broadcast to the correct size of the image stacks (see broadcast_weights()), as necessary.

  • spat_samp_fact (float, optional) – Spatial sampling for 2d coadd spatial bins in pixels. A value > 1.0 (i.e. bigger pixels) will downsample the images spatially, whereas < 1.0 will oversample. Default = 1.0

  • loglam_grid (numpy.ndarray, optional) – Wavelength grid in log10(wave) onto which the image stacks will be rectified. The code will automatically choose the subset of this grid encompassing the wavelength coverage of the image stacks provided (see waveimg_stack). Either loglam_grid or wave_grid must be provided.

  • wave_grid (numpy.ndarray, optional) – Same as loglam_grid but in angstroms instead of log(angstroms). (TODO: Check units…)

  • maskdef_dict (dict, optional) – Dictionary containing all the maskdef info. The quantities saved are: maskdef_id, maskdef_objpos, maskdef_slitcen, maskdef_designtab. To learn what they are see SlitTraceSet datamodel.

  • interp_dspat (bool, optional) – Interpolate in the spatial coordinate image to facilitate running through core.extract.local_skysub_extract. This can be slow. Default=True.

Returns:

Returns a dict with the following keys:

  • wave_bins:

  • dspat_bins:

  • wave_mid:

  • wave_min:

  • wave_max:

  • dspat_mid:

  • sciimg: float ndarray shape = (nspec_coadd, nspat_coadd): Rectified and coadded science image

  • sciivar: float ndarray shape = (nspec_coadd, nspat_coadd): Rectified and coadded inverse variance image with correct error propagation

  • imgminsky: float ndarray shape = (nspec_coadd, nspat_coadd): Rectified and coadded sky subtracted image

  • outmask: bool ndarray shape = (nspec_coadd, nspat_coadd): Output mask for rectified and coadded images. True = Good, False=Bad.

  • nused: int ndarray shape = (nspec_coadd, nspat_coadd): Image of integers indicating the number of images from the image stack that contributed to each pixel

  • waveimg: float ndarray shape = (nspec_coadd, nspat_coadd): The averaged wavelength image corresponding to the rectified and coadded data.

  • dspat: float ndarray shape = (nspec_coadd, nspat_coadd): The average spatial offsets in pixels from the reference trace trace_stack corresponding to the rectified and coadded data.

  • nspec: int

  • nspat: int

  • maskdef_id: int

  • maskdef_slitcen: int

  • maskdef_objpos: int

  • maskdef_designtab: int

Return type:

dict
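As a toy illustration of the nearest-grid-point rectification described above (a hedged sketch with made-up numbers, not the pypeit implementation): each detector pixel is dropped into a (wavelength, spatial offset) bin with np.histogram2d, so no interpolation is performed and errors stay uncorrelated.

```python
import numpy as np

# Per-pixel wavelengths and spatial offsets from the reference trace
wave = np.array([4000.5, 4001.5, 4000.5, 4001.5])
dspat = np.array([-0.5, -0.5, 0.5, 0.5])
flux = np.array([1.0, 2.0, 3.0, 4.0])

# Bin edges for the rectified (nspec_coadd, nspat_coadd) image
wave_bins = np.array([4000.0, 4001.0, 4002.0])
dspat_bins = np.array([-1.0, 0.0, 1.0])

# Weighted 2D histogram: sum of flux per rectified pixel, and the number
# of contributing detector pixels (analogous to the 'nused' output)
fsum, _, _ = np.histogram2d(wave, dspat, bins=(wave_bins, dspat_bins), weights=flux)
nused, _, _ = np.histogram2d(wave, dspat, bins=(wave_bins, dspat_bins))
img = fsum / np.maximum(nused, 1)  # unweighted mean per rectified pixel
# img -> [[1., 3.], [2., 4.]]
```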

pypeit.core.coadd.compute_stack(wave_grid, waves, fluxes, ivars, gpms, weights, min_weight=1e-08)[source]

Compute a stacked spectrum from a set of exposures on the specified wave_grid with proper treatment of weights and masking. This code uses np.histogram to combine the data using NGP and does not perform any interpolations, and thus does not correlate errors. It uses wave_grid to determine the set of wavelength bins that the data are averaged on. The final spectrum will be on an output wavelength grid which is not the same as wave_grid. The output wavelength grid is the weighted average of the individual wavelengths used for each exposure that fell into a given wavelength bin in the input wave_grid. This 1d coadding routine thus maintains the independence of the errors for each pixel in the combined spectrum and computes the weighted averaged wavelengths of each pixel in an analogous way to the 2d extraction procedure, which also never interpolates to avoid correlating errors.

Parameters:
  • wave_grid (numpy.ndarray) – new wavelength grid desired. This will typically be a regularly spaced grid created by the get_wave_grid routine. The reason for the ngrid+1 is that this is the general way to specify a set of bins if you desire ngrid bin centers, i.e. the output stacked spectra have ngrid elements. The spacing of this grid can be regular in lambda (better for multislit) or log lambda (better for echelle). This new wavelength grid should be designed with the sampling of the data in mind. For example, the code will work fine if you choose the sampling to be too fine, but then the number of exposures contributing to any given wavelength bin will be one or zero in the limiting case of very small wavelength bins. For larger wavelength bins, the number of exposures contributing to a given bin will be larger. shape=(ngrid +1,)

  • waves (list) – List of length nexp numpy.ndarray float wavelength arrays for spectra to be stacked. Note that the wavelength grids can in general be different for each exposure, irregularly spaced, and can have different sizes.

  • fluxes (list) – List of length nexp numpy.ndarray float flux arrays for spectra to be stacked. These are aligned to the wavelength grids in waves.

  • ivars (list) – List of length nexp numpy.ndarray float inverse variance arrays for spectra to be stacked. These are aligned to the wavelength grids in waves.

  • gpms (list) – List of length nexp numpy.ndarray boolean good pixel mask arrays for spectra to be stacked. These are aligned to the wavelength grids in waves. True=Good.

  • weights (list) – List of length nexp numpy.ndarray float weights to be used for combining the spectra. These are aligned to the wavelength grids in waves and were computed using sn_weights.

  • min_weight (float, optional) – Minimum allowed weight for any individual spectrum

Returns:

  • wave_stack (numpy.ndarray) – Wavelength grid for stacked spectrum. As discussed above, this is the weighted average of the wavelengths of each spectrum that contributed to a bin in the input wave_grid wavelength grid. It thus has ngrid elements, whereas wave_grid has ngrid+1 elements to specify the ngrid total number of bins. Note that wave_stack is NOT simply the wave_grid bin centers, since it computes the weighted average. shape=(ngrid,)

  • flux_stack (numpy.ndarray, shape=(ngrid,)) – Final stacked spectrum on wave_stack wavelength grid

  • ivar_stack (numpy.ndarray, shape=(ngrid,)) – Inverse variance spectrum on wave_stack wavelength grid. Errors are propagated according to weighting and masking.

  • gpm_stack (numpy.ndarray, shape=(ngrid,)) – Boolean Mask for stacked spectrum on wave_stack wavelength grid. True=Good.

  • nused (numpy.ndarray, shape=(ngrid,)) – Number of exposures which contributed to each pixel in the wave_stack. Note that this is in general different from nexp because of masking, but also because of the sampling specified by wave_grid. In other words, sometimes more spectral pixels in the irregularly gridded input wavelength array waves will land in one bin versus another depending on the sampling.
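A minimal numpy sketch of the histogram-based NGP averaging described above (illustrative only; the toy data and variable names are invented, and the real routine also propagates inverse variance and masks):

```python
import numpy as np

wave_grid = np.array([1.0, 2.0, 3.0])  # ngrid+1 = 3 edges -> ngrid = 2 bins
waves = [np.array([1.2, 2.4]), np.array([1.6, 2.6])]
fluxes = [np.array([1.0, 3.0]), np.array([2.0, 5.0])]
weights = [np.ones(2), np.ones(2)]

wsum = np.zeros(2); fwsum = np.zeros(2); lamwsum = np.zeros(2); nused = np.zeros(2)
for w, f, wt in zip(waves, fluxes, weights):
    # Each pixel is dropped into its wave_grid bin (NGP, no interpolation)
    wsum += np.histogram(w, bins=wave_grid, weights=wt)[0]
    fwsum += np.histogram(w, bins=wave_grid, weights=wt * f)[0]
    lamwsum += np.histogram(w, bins=wave_grid, weights=wt * w)[0]
    nused += np.histogram(w, bins=wave_grid)[0]

flux_stack = fwsum / np.maximum(wsum, 1e-8)
# Weighted-average wavelength per bin: NOT simply the bin centers
wave_stack = lamwsum / np.maximum(wsum, 1e-8)
# wave_stack -> [1.4, 2.5]; flux_stack -> [1.5, 4.0]; nused -> [2, 2]
```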

pypeit.core.coadd.ech_combspec(waves_arr_setup, fluxes_arr_setup, ivars_arr_setup, gpms_arr_setup, weights_sens_arr_setup, setup_ids=None, nbests=None, wave_method='log10', dwave=None, dv=None, dloglam=None, spec_samp_fact=1.0, wave_grid_min=None, wave_grid_max=None, ref_percentile=70.0, maxiter_scale=5, niter_order_scale=3, sigrej_scale=3.0, scale_method='auto', hand_scale=None, sn_min_polyscale=2.0, sn_min_medscale=0.5, maxiter_reject=5, sn_clip=30.0, lower=3.0, upper=3.0, maxrej=None, qafile=None, debug_scale=False, debug_order_stack=False, debug_global_stack=False, debug=False, show_order_stacks=False, show_order_scale=False, show_exp=False, show=False, verbose=False)[source]

Driver routine for coadding Echelle spectra. Calls combspec which is the main stacking algorithm. It will deliver three fits files: spec1d_order_XX.fits (stacked individual orders, one order per extension), spec1d_merge_XX.fits (straight combine of stacked individual orders), spec1d_stack_XX.fits (a giant stack of all exposures and all orders). In most cases, you should use spec1d_stack_XX.fits for your scientific analyses since it rejects most outliers.

Parameters:
  • waves_arr_setup (list of numpy.ndarray) – List of wavelength arrays for spectra to be stacked. The length of the list is nsetups. Each element of the list corresponds to a distinct setup and each numpy array has shape=(nspec, norder, nexp)

  • fluxes_arr_setup (list of numpy.ndarray) – List of flux arrays for spectra to be stacked. The length of the list is nsetups. Each element of the list corresponds to a distinct setup and each numpy array has shape=(nspec, norder, nexp)

  • ivars_arr_setup (list of numpy.ndarray) – List of ivar arrays for spectra to be stacked. The length of the list is nsetups. Each element of the list corresponds to a distinct setup and each numpy array has shape=(nspec, norder, nexp)

  • gpms_arr_setup (list of numpy.ndarray) – List of good pixel mask arrays for spectra to be stacked. The length of the list is nsetups. Each element of the list corresponds to a distinct setup and each numpy array has shape=(nspec, norder, nexp)

  • weights_sens_arr_setup (list of numpy.ndarray) – List of sensitivity function weights required for relative weighting of the orders. The length of the list is nsetups. Each element of the list corresponds to a distinct setup and each numpy array has shape=(nspec, norder, nexp)

  • setup_ids (list, optional, default=None) – List of strings indicating the name of each setup. If None, uppercase letters A, B, C, etc. will be used

  • nbests (int or list or numpy.ndarray, optional) – Integer or list of integers indicating the number of orders to use for estimating the per exposure weights per echelle setup. Default is nbests=None, which will just use one fourth of the orders for a given setup.

  • wave_method (str, optional) – method for generating new wavelength grid with get_wave_grid. Default is ‘log10’, which creates a uniformly spaced grid in log10(lambda); this is typically the best choice for echelle spectrographs

  • dwave (float, optional) – dispersion in units of A in case you want to specify it for get_wave_grid, otherwise the code computes the median spacing from the data.

  • dv (float, optional) – Dispersion in units of km/s in case you want to specify it in the get_wave_grid (for the ‘velocity’ option), otherwise a median value is computed from the data.

  • dloglam (float, optional) – Dispersion in dimensionless units

  • spec_samp_fact (float, optional) – Make the wavelength grid sampling finer (spec_samp_fact < 1.0) or coarser (spec_samp_fact > 1.0) by this sampling factor. This basically multiplies the ‘native’ spectral pixel size by spec_samp_fact, i.e. the units of spec_samp_fact are pixels.

  • wave_grid_min (float, optional) – In case you want to specify the minimum wavelength in your wavelength grid, default=None computes from data.

  • wave_grid_max (float, optional) – In case you want to specify the maximum wavelength in your wavelength grid, default=None computes from data.

  • wave_grid_input (numpy.ndarray, optional) – User input wavelength grid to be used with the ‘user_input’ wave_method. Shape is (nspec_input,)

  • ref_percentile (float, optional) – Percentile threshold used for selecting the minimum S/N cut in robust_median_ratio

  • maxiter_scale (int, optional, default=5) – Maximum number of iterations performed for rescaling spectra.

  • sigrej_scale (float, optional, default=3.0) – Rejection threshold used for rejecting pixels when rescaling spectra with scale_spec.

  • hand_scale (numpy.ndarray, optional) – Array of hand scale factors, not well tested

  • sn_min_polyscale (float, optional, default = 2.0) – minimum S/N required to perform polynomial scaling

  • sn_min_medscale (float, optional, default = 0.5) – minimum S/N required to perform median scaling

  • maxiter_reject (int, optional, default=5) – maximum number of iterations for stacking and rejection. The code stops iterating either when the output mask does not change between successive iterations or when maxiter_reject is reached.

  • sn_clip (float, optional, default=30.0) – Errors are capped during rejection so that the S/N is never greater than sn_clip. This prevents overly aggressive rejection in high S/N spectra which nevertheless differ at a level greater than the implied S/N due to systematics.

  • lower (float, optional, default=3.0) – lower rejection threshold for djs_reject

  • upper (float, optional, default=3.0) – upper rejection threshold for djs_reject

  • maxrej (int, optional) – maximum number of pixels to reject in each iteration for djs_reject.

  • debug_order_stack (bool, default=False) – show interactive QA plots for the order stacking

  • debug_global_stack (bool, default=False) – show interactive QA plots for the global stacking of all the setups/orders/exposures

  • debug (bool, optional, default=False) – show all QA plots useful for debugging. Note there are lots of QA plots, so only set this to True if you want to inspect them all.

  • debug_scale (bool, optional, default=False) – show interactive QA plots for the rescaling of the spectra

  • show_order_scale (bool, optional, default=False) – show interactive QA plots for the order scaling

  • show (bool, optional, default=False) – If True, show the coadded spectra QA plot

  • show_exp (bool, optional, default=False) – If True, show the individual exposure spectra QA plots

Returns:

  • wave_grid_mid (numpy.ndarray) – Wavelength grid (in Angstrom) evaluated at the bin centers, uniformly-spaced either in lambda or log10-lambda/velocity. See core.wavecal.wvutils.py for more. shape=(ngrid,)

  • a_tuple (tuple) –

    Contains:

    • wave_giant_stack: ndarray, (ngrid,): Wavelength grid for stacked spectrum. As discussed above, this is the weighted average of the wavelengths of each spectrum that contributed to a bin in the input wave_grid wavelength grid. It thus has ngrid elements, whereas wave_grid has ngrid+1 elements to specify the ngrid total number of bins. Note that wave_giant_stack is NOT simply the wave_grid bin centers, since it computes the weighted average;

    • flux_giant_stack: ndarray, (ngrid,): Final stacked spectrum on wave_stack wavelength grid;

    • ivar_giant_stack: ndarray, (ngrid,): Inverse variance spectrum on wave_stack wavelength grid. Errors are propagated according to weighting and masking;

    • mask_giant_stack: ndarray, bool, (ngrid,): Mask for stacked spectrum on wave_stack wavelength grid. True=Good.

  • b_tuple (tuple) –

    Contains:

    • waves_stack_orders

    • fluxes_stack_orders

    • ivars_stack_orders

    • masks_stack_orders
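To illustrate the nbests idea, here is a hypothetical sketch (not the pypeit code) of estimating per-exposure weights from only the nbest highest-S/N orders of a setup, with the default of one fourth of the orders:

```python
import numpy as np

# Toy rms S/N per (order, exposure) for one setup; numbers are invented
rms_sn = np.array([[5.0, 1.0],
                   [8.0, 2.0],
                   [2.0, 0.5],
                   [1.0, 0.2]])        # shape (norder, nexp)
norder = rms_sn.shape[0]
nbest = max(1, norder // 4)            # docs: default uses one fourth of the orders
# Pick the nbest orders with the highest mean S/N across exposures
best = np.argsort(rms_sn.mean(axis=1))[::-1][:nbest]
# S/N^2 weighting per exposure, averaged over the selected orders
exp_weights = (rms_sn[best] ** 2).mean(axis=0)
# best -> [1] (the order with per-exposure S/N of 8 and 2)
```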

pypeit.core.coadd.get_spat_bins(thismask_stack, trace_stack, spat_samp_fact=1.0)[source]

Determine the spatial bins for a 2d coadd and relative pixel coordinate images. This routine loops over all the images being coadded and creates an image of spatial pixel positions relative to the reference trace for each image, in units of the desired rebinned spatial pixel sampling spat_samp_fact. The minimum and maximum relative pixel positions in this frame are then used to define a spatial position grid with the desired spatial pixel sampling.

Parameters:
  • thismask_stack (list) – List of boolean arrays containing the masks indicating which pixels are on the slit in question. True values are on the slit; False values are off the slit. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.

  • ref_trace_stack (list) –

    List of reference traces about which the images are rectified and coadded. If the images were not dithered then this reference trace can simply be the center of the slit:

    slitcen = (slit_left + slit_righ)/2
    

    If the images were dithered, then this object can either be the slitcen appropriately shifted with the dither pattern, or it could be the trace of the object of interest in each exposure determined by running PypeIt on the individual images. The list has nimgs elements, each of which is a 1D numpy.ndarray of shape (nspec,).

  • spat_samp_fact (float, optional) – Spatial sampling for 2d coadd spatial bins in pixels. A value > 1.0 (i.e. bigger pixels) will downsample the images spatially, whereas < 1.0 will oversample. Default = 1.0

Returns:

  • dspat_bins (numpy.ndarray) – shape (spat_max_int +1 - spat_min_int,) Array of spatial bins for rectifying the image.

  • dspat_stack (numpy.ndarray) – shape (nimgs, nspec, nspat) Image stack which has the spatial position of each exposure relative to the trace in the trace_stack for that image.
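A sketch of the assumed logic: the relative spatial offsets of on-slit pixels from the reference trace set the bin edges for the rectified spatial axis. All names and numbers here are illustrative, not the pypeit source.

```python
import numpy as np

spat_samp_fact = 1.0
nspec, nspat = 3, 6
# Column index of each pixel, repeated for every spectral row
spat_img = np.tile(np.arange(nspat, dtype=float), (nspec, 1))
# One reference-trace position per spectral row (e.g. the slit center)
ref_trace = np.array([2.0, 2.5, 3.0])
# Offset of each pixel from the trace, in rebinned-pixel units
dspat = (spat_img - ref_trace[:, None]) / spat_samp_fact

# Integer min/max offsets define the spatial bin edges
spat_min_int = int(np.floor(dspat.min()))
spat_max_int = int(np.ceil(dspat.max()))
dspat_bins = np.arange(spat_min_int, spat_max_int + 1, 1.0)
# dspat spans -3.0 .. 3.0 here, giving 7 bin edges
```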

pypeit.core.coadd.get_wave_bins(thismask_stack, waveimg_stack, wave_grid)[source]

Utility routine to get the wavelength bins for 2d coadds from a mask

Parameters:
  • thismask_stack (list) – List of boolean arrays containing the masks indicating which pixels are on the slit in question. True values are on the slit; False values are off the slit. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.

  • waveimg_stack (list) – List of the wavelength images, each of which is a float numpy.ndarray. Length of the list is nimgs. Shapes of the individual elements in the list are (nspec, nspat), but each image can have a different shape.

  • wave_grid (numpy.ndarray, shape=(ngrid,)) – The wavelength grid created for the 2d coadd

Returns:

wave_bins – Wavelength bins that are relevant given the illuminated pixels (thismask_stack) and wavelength coverage (waveimg_stack) of the image stack

Return type:

numpy.ndarray, shape = (ind_upper-ind_lower + 1, )
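The assumed behavior can be sketched as follows: keep only the portion of wave_grid spanned by the wavelengths of illuminated (on-slit) pixels. This is an illustrative reconstruction with toy data, not the pypeit source.

```python
import numpy as np

wave_grid = np.arange(4000.0, 4010.0 + 1)     # bin edges 4000 .. 4010
waveimg = np.array([[4002.3, 4003.1],
                    [4004.8, 4005.9]])
thismask = np.array([[True, True],
                     [True, False]])          # last pixel is off-slit

good_waves = waveimg[thismask]
# Indices of the grid edges bracketing the illuminated wavelength range
ind_lower = np.searchsorted(wave_grid, good_waves.min(), side='right') - 1
ind_upper = np.searchsorted(wave_grid, good_waves.max(), side='left')
wave_bins = wave_grid[ind_lower:ind_upper + 1]
# wave_bins spans 4002 .. 4005; the off-slit 4005.9 pixel is ignored
```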

pypeit.core.coadd.get_wave_ind(wave_grid, wave_min, wave_max)[source]

Utility routine used by coadd2d to determine the starting and ending indices of a wavelength grid.

Parameters:
  • wave_grid (numpy.ndarray) – Wavelength grid.

  • wave_min (float) – Minimum wavelength covered by the data in question.

  • wave_max (float) – Maximum wavelength covered by the data in question.

Returns:

  • ind_lower (int) – Lower index corresponding to wave_min

  • ind_upper (int) – Upper index corresponding to wave_max

pypeit.core.coadd.get_ylim(flux, ivar, mask)[source]

Utility routine for setting the plot limits for QA plots.

Parameters:
Returns:

lower and upper limits for plotting.

Return type:

tuple
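One plausible way such plot limits could be computed (a hypothetical sketch; the actual pypeit heuristic may differ):

```python
import numpy as np

def ylim_sketch(flux, ivar, mask, pad_frac=0.1):
    # Restrict to good pixels, take robust percentiles, pad the range
    good = mask & (ivar > 0.0)
    lo, hi = np.percentile(flux[good], [5.0, 95.0])
    pad = pad_frac * (hi - lo)
    return lo - pad, hi + pad

flux = np.arange(101.0)
ivar = np.ones(101)
mask = np.ones(101, dtype=bool)
lo, hi = ylim_sketch(flux, ivar, mask)
# (lo, hi) -> (-4.0, 104.0)
```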

pypeit.core.coadd.interp_oned(wave_new, wave_old, flux_old, ivar_old, gpm_old, log10_blaze_function=None, sensfunc=False, kind='cubic')[source]

Interpolate a 1D spectrum onto a new wavelength grid.

Interpolation is done using scipy.interpolate.interp1d with cubic interpolation. Any wavelengths in wave_new that are beyond the range of wave_old are set to np.nan and masked via the output good-pixel mask.

Warning

Any wavelength in wave_old that is less than 1 is assumed to indicate that the wavelength is invalid!

Parameters:
  • wave_new (numpy.ndarray) – New wavelength grid for the output spectra. Must be 1D.

  • wave_old (numpy.ndarray) – Old wavelength grid. Must be 1D, need not have the same size as wave_new.

  • flux_old (numpy.ndarray) – Old flux on the wave_old grid. Shape must match wave_old.

  • ivar_old (numpy.ndarray) – Old ivar on the wave_old grid. Shape must match wave_old.

  • gpm_old (numpy.ndarray) – Old good-pixel mask (True=Good) on the wave_old grid. Shape must match wave_old.

  • log10_blaze_function (numpy.ndarray or None, optional) – Log10 of the blaze function. Shape must match wave_old. Default=None.

  • sensfunc (bool, optional) – If True, the quantities flux*delta_wave and the corresponding ivar/delta_wave**2 will be interpolated and returned instead of flux and ivar. This is useful for sensitivity function computation where we need flux*(wavelength bin width). Because delta_wave is a difference of the wavelength grid, interpolating in the presence of masked data requires special care.

  • kind (int, str, optional) – Specifies the kind of interpolation as a string or as an integer specifying the order of the spline interpolator to use following the convention of scipy.interpolate.interp1d. The string has to be one of ‘linear’, ‘nearest’, ‘nearest-up’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘previous’, or ‘next’. ‘zero’, ‘slinear’, ‘quadratic’ and ‘cubic’ refer to a spline interpolation of zeroth, first, second or third order; ‘previous’ and ‘next’ simply return the previous or next value of the point; ‘nearest-up’ and ‘nearest’ differ when interpolating half-integers (e.g. 0.5, 1.5) in that ‘nearest-up’ rounds up and ‘nearest’ rounds down. Default is ‘cubic’.

Returns:

Returns four objects: flux_new, ivar_new, gpm_new, and log10_blaze_new, the interpolated flux, inverse variance, good-pixel mask, and log10 blaze function, each with length matching the new wavelength grid. All are numpy.ndarray objects, except that log10_blaze_new is None if log10_blaze_function is None.

Return type:

tuple
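
The out-of-range masking described above can be sketched with scipy directly. This is an illustrative stand-in for interp_oned, not the PypeIt implementation; the function name and exact masking logic are assumptions.

```python
import numpy as np
from scipy.interpolate import interp1d

def interp_oned_sketch(wave_new, wave_old, flux_old, ivar_old, gpm_old, kind='cubic'):
    """Sketch: interpolate flux/ivar onto wave_new, then mask pixels that
    fall outside the valid range of wave_old (hypothetical helper)."""
    # Per the warning above, wavelengths below 1 are treated as invalid.
    valid = gpm_old & (wave_old >= 1.0)
    wv, fl, iv = wave_old[valid], flux_old[valid], ivar_old[valid]
    flux_new = interp1d(wv, fl, kind=kind, bounds_error=False, fill_value=np.nan)(wave_new)
    ivar_new = interp1d(wv, iv, kind=kind, bounds_error=False, fill_value=np.nan)(wave_new)
    # Good pixels: inside the old wavelength range and finite after interpolation
    gpm_new = ((wave_new >= wv.min()) & (wave_new <= wv.max())
               & np.isfinite(flux_new) & np.isfinite(ivar_new))
    return flux_new, ivar_new, gpm_new
```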

pypeit.core.coadd.interp_spec(wave_new, waves, fluxes, ivars, gpms, log10_blaze_function=None, sensfunc=False, kind='cubic')[source]

Interpolate a set of spectra onto a new wavelength grid.

The method can perform two types of interpolation, depending on the shapes of the input arrays.

  1. If the new wavelength grid (wave_new) is 1D, all input spectra are interpolated to this new grid. The input spectra can be provided as either 1D or 2D arrays.

  2. If the new wavelength grid (wave_new) is 2D, all input spectra must be 1D. The single spectrum is then interpolated onto each of the new wavelength grids.

Parameters:
  • wave_new (numpy.ndarray, shape (nspec,) or (nspec, nimgs),) – New wavelength grid for output spectra. Shape can be 1D or 2D. See the method description for how this affects the code flow above.

  • waves (numpy.ndarray, shape (nspec,) or (nspec, nexp)) – Wavelength vector for current spectra. Shape can be 1D or 2D, where nexp, need not equal nimgs. See the method description for how this affects the code flow above.

  • fluxes (numpy.ndarray) – Flux vectors. Shape must match waves.

  • ivars (numpy.ndarray) – Inverse variance vectors. Shape must match waves.

  • gpms (numpy.ndarray) – Boolean good-pixel masks for each spectrum (True=Good). Shape must match waves.

  • log10_blaze_function (numpy.ndarray or None, optional) – Log10 of the blaze function. Shape must match waves. Default=None.

  • sensfunc (bool, optional) – If True, the quantities flux*delta_wave and the corresponding ivar/delta_wave**2 will be interpolated and returned instead of flux and ivar. This is useful for sensitivity function computation where we need flux*(wavelength bin width). Because delta_wave is a difference of the wavelength grid, interpolating in the presence of masked data requires special care.

  • kind (str or int, optional) – Specifies the kind of interpolation as a string or as an integer specifying the order of the spline interpolator to use following the convention of scipy.interpolate.interp1d. The string has to be one of ‘linear’, ‘nearest’, ‘nearest-up’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘previous’, or ‘next’. ‘zero’, ‘slinear’, ‘quadratic’ and ‘cubic’ refer to a spline interpolation of zeroth, first, second or third order; ‘previous’ and ‘next’ simply return the previous or next value of the point; ‘nearest-up’ and ‘nearest’ differ when interpolating half-integers (e.g. 0.5, 1.5) in that ‘nearest-up’ rounds up and ‘nearest’ rounds down. Default is ‘cubic’.

Returns:

  • fluxes_inter (numpy.ndarray) – interpolated flux with size and shape matching the new wavelength grid.

  • ivars_inter (numpy.ndarray) – interpolated inverse variance with size and shape matching the new wavelength grid.

  • gpms_inter (numpy.ndarray) – interpolated good-pixel mask with size and shape matching the new wavelength grid.

  • log10_blazes_inter (numpy.ndarray or None) – interpolated log10 blaze function with size and shape matching the new wavelength grid. If log10_blaze_function is not provided as an input, this will be None.

pypeit.core.coadd.median_filt_spec(flux, ivar, gpm, med_width)[source]

Utility routine to median filter a spectrum using the mask and propagating the errors using the utils.fast_running_median function.

Parameters:
  • flux (numpy.ndarray) – flux array with shape (nspec,)

  • ivar (numpy.ndarray) – inverse variance with shape (nspec,)

  • gpm (numpy.ndarray) – Boolean mask on the spectrum with shape (nspec,). True = good

  • med_width (float) – width for median filter in pixels

Returns:

pypeit.core.coadd.multi_combspec(waves, fluxes, ivars, masks, sn_smooth_npix=None, wave_method='linear', dwave=None, dv=None, dloglam=None, spec_samp_fact=1.0, wave_grid_min=None, wave_grid_max=None, ref_percentile=70.0, maxiter_scale=5, sigrej_scale=3.0, scale_method='auto', hand_scale=None, sn_min_polyscale=2.0, sn_min_medscale=0.5, weight_method='auto', maxiter_reject=5, sn_clip=30.0, lower=3.0, upper=3.0, maxrej=None, qafile=None, debug=False, debug_scale=False, show_scale=False, show=False)[source]

Routine for coadding longslit/multi-slit spectra. Calls combspec which is the main stacking algorithm.

Parameters:
  • waves (list) – List of numpy.ndarray wavelength arrays with shape (nspec_i,) with the wavelength arrays of the spectra to be coadded.

  • fluxes (list) – List of numpy.ndarray flux arrays with shape (nspec_i,) for the spectra to be coadded.

  • ivars (list) – List of numpy.ndarray inverse variance arrays with shape (nspec_i,) for the spectra to be coadded.

  • masks (list) – List of numpy.ndarray mask arrays with shape (nspec_i,) for the spectra to be coadded.

  • sn_smooth_npix (int, optional) – Number of pixels to median filter by when computing S/N used to decide how to scale and weight spectra. If set to None, the code will determine the effective number of good pixels per spectrum in the stack that is being co-added and use 10% of this neff.

  • wave_method (str, optional) – method for generating new wavelength grid with get_wave_grid. Default is ‘linear’ which creates a uniformly spaced grid in lambda. See documentation of get_wave_grid for a description of the options.

  • dwave (float, optional) – dispersion in units of A in case you want to specify it for get_wave_grid, otherwise the code computes the median spacing from the data.

  • dv (float, optional) – Dispersion in units of km/s in case you want to specify it in get_wave_grid (for the ‘velocity’ option), otherwise a median value is computed from the data.

  • dloglam (float, optional) – Dispersion in units of log10 wavelength in case you want to specify it in get_wave_grid (for the ‘log10’ option), otherwise a median value is computed from the data.

  • spec_samp_fact (float, optional) – Make the wavelength grid sampling finer (spec_samp_fact < 1.0) or coarser (spec_samp_fact > 1.0) by this sampling factor. This effectively multiplies the ‘native’ spectral pixel size by spec_samp_fact, i.e. the units of spec_samp_fact are pixels.

  • wave_grid_min (float, optional) – In case you want to specify the minimum wavelength in your wavelength grid, default=None computes from data.

  • wave_grid_max (float, optional) – In case you want to specify the maximum wavelength in your wavelength grid, default=None computes from data.

  • wave_grid_input (numpy.ndarray) – User input wavelength grid to be used with the ‘user_input’ wave_method. Shape is (nspec_input,)

  • maxiter_reject (int, optional) – maximum number of iterations for stacking and rejection. The code stops iterating either when the output mask does not change between successive iterations or when maxiter_reject is reached. Default=5.

  • ref_percentile (float, optional) – percentile fraction cut used for selecting minimum SNR cut for robust_median_ratio. Should be a number between 0 and 100, default = 70.0

  • maxiter_scale (int, optional) – Maximum number of iterations performed for rescaling spectra. Default=5.

  • sigrej_scale (float, optional) – Rejection threshold used for rejecting pixels when rescaling spectra with scale_spec. Default=3.0

  • scale_method (str, optional) – Options are auto, poly, median, none, or hand. Hand is not well tested. User can optionally specify the rescaling method. Default=’auto’ lets the code determine this automatically, which works well.

  • hand_scale (numpy.ndarray, optional) – Array of hand scale factors, not well tested

  • sn_min_polyscale (float, optional) – maximum SNR for performing median scaling (above this, polynomial scaling is used)

  • sn_min_medscale (float, optional) – minimum SNR for performing median scaling

  • weight_method (str, optional) – Weight method to use for coadding spectra (see sn_weights() for documentation). Default=’auto’

  • sn_clip (float, optional) – Errors are capped during rejection so that the S/N is never greater than sn_clip. This prevents overly aggressive rejection in high S/N ratio spectra which nevertheless differ at a level greater than the implied S/N due to systematics.

  • lower (float, optional) – lower rejection threshold for djs_reject

  • upper (float, optional) – upper rejection threshold for djs_reject

  • maxrej (int, optional) – maximum number of pixels to reject in each iteration for djs_reject.

  • nmaskedge (int, optional) – Number of edge pixels to mask. This should be removed/fixed.

  • qafile (str, optional, default=None) – Root name for QA, if None, it will be determined from the outfile

  • outfile (str, optional, default=None,) – Root name for QA, if None, it will come from the target name from the fits header.

  • debug (bool, optional, default=False,) – Show all QA plots useful for debugging. Note there are lots of QA plots, so only set this to True if you want to inspect them all.

  • debug_scale (bool, optional, default=False) – show interactive QA plots for the rescaling of the spectra

  • show (bool, optional, default=False) – Show key QA plots or not

Returns:

  • wave_grid_mid (numpy.ndarray, (ngrid,)) – Wavelength grid (in Angstrom) evaluated at the bin centers, uniformly-spaced either in lambda or log10-lambda/velocity. See core.wavecal.wvutils.py for more.

  • wave_stack (numpy.ndarray, (ngrid,)) – Wavelength grid for stacked spectrum. As discussed above, this is the weighted average of the wavelengths of each spectrum that contributed to a bin in the input wave_grid wavelength grid. It thus has ngrid elements, whereas wave_grid has ngrid+1 elements to specify the ngrid total number of bins. Note that wave_stack is NOT simply the wave_grid bin centers, since it computes the weighted average.

  • flux_stack (numpy.ndarray, (ngrid,)) – Final stacked spectrum on wave_stack wavelength grid

  • ivar_stack (numpy.ndarray, (ngrid,)) – Inverse variance spectrum on wave_stack wavelength grid. Errors are propagated according to weighting and masking.

  • mask_stack (numpy.ndarray, bool, (ngrid,)) – Mask for stacked spectrum on wave_stack wavelength grid. True=Good.
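
The core of the stacking that multi_combspec ultimately performs (ignoring scaling and rejection) is a weighted mean of the contributing pixels in each wavelength bin, with the ivar propagated through the same weights. A minimal numpy sketch, with a hypothetical simple_stack helper standing in for the real combspec machinery:

```python
import numpy as np

def simple_stack(wave_grid, waves, fluxes, ivars, weights):
    """Sketch: for each bin of wave_grid (ngrid+1 edges), compute the
    weighted mean flux, weighted mean wavelength, and propagated ivar of
    all contributing pixels (hypothetical simplified stand-in)."""
    ngrid = wave_grid.size - 1
    wsum = np.zeros(ngrid)
    flux_stack = np.zeros(ngrid)
    wave_stack = np.zeros(ngrid)
    var_num = np.zeros(ngrid)
    for wv, fl, iv, wt in zip(waves, fluxes, ivars, weights):
        good = iv > 0
        bins = np.digitize(wv[good], wave_grid) - 1
        ok = (bins >= 0) & (bins < ngrid)
        np.add.at(wsum, bins[ok], wt[good][ok])
        np.add.at(flux_stack, bins[ok], (wt[good] * fl[good])[ok])
        np.add.at(wave_stack, bins[ok], (wt[good] * wv[good])[ok])
        np.add.at(var_num, bins[ok], (wt[good]**2 / iv[good])[ok])
    pop = wsum > 0  # populated bins define the output mask
    flux_stack[pop] /= wsum[pop]
    wave_stack[pop] /= wsum[pop]
    ivar_stack = np.zeros(ngrid)
    ivar_stack[pop] = wsum[pop]**2 / var_num[pop]
    return wave_stack, flux_stack, ivar_stack, pop
```

Note that the output wavelength is the weighted average of the contributing wavelengths, not the bin center, mirroring the wave_stack description above.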

pypeit.core.coadd.poly_model_eval(theta, func, model, wave, wave_min, wave_max)[source]

Routine to evaluate the polynomial fit

Parameters:
  • theta (numpy.ndarray) – coefficient parameter vector of type=float

  • func (str) – polynomial type

  • model (str) – model type, valid model types are ‘poly’, ‘square’, or ‘exp’, corresponding to normal polynomial, squared polynomial, or exponentiated polynomial

  • wave (numpy.ndarray) – array of wavelength values of type=float

  • wave_min (float) – minimum wavelength for polynomial fit range

  • wave_max (float) – maximum wavelength for polynomial fit range

Returns:

Array of evaluated polynomial with same shape as wave

Return type:

numpy.ndarray
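
One plausible reading of the evaluation described above, assuming a Legendre basis evaluated on wavelengths rescaled to [-1, 1]; the helper below is an illustrative sketch, not the PypeIt code:

```python
import numpy as np
from numpy.polynomial import legendre

def poly_model_eval_sketch(theta, model, wave, wave_min, wave_max):
    """Sketch: evaluate a Legendre series on rescaled wavelengths, then
    apply the model transform: 'poly' -> raw value, 'square' -> value**2,
    'exp' -> exp(value)."""
    # Rescale wavelengths to the [-1, 1] domain of the basis
    x = 2.0 * (wave - wave_min) / (wave_max - wave_min) - 1.0
    y = legendre.legval(x, theta)
    if model == 'poly':
        return y
    if model == 'square':
        return y**2
    if model == 'exp':
        return np.exp(y)
    raise ValueError(f"unknown model '{model}'")
```

The 'square' and 'exp' transforms guarantee a positive scale factor, which is presumably why they exist alongside the plain polynomial.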

pypeit.core.coadd.poly_ratio_fitfunc(flux_ref, gpm, arg_dict, init_from_last=None, **kwargs_opt)[source]

Function to be optimized by robust_optimize for solve_poly_ratio polynomial rescaling of one spectrum to match a reference spectrum. This function has the correct format for running robust_optimize optimization. In addition to running the optimization, this function recomputes the error vector ivartot for the error rejection that takes place at each iteration of the robust_optimize optimization. The ivartot is also renormalized using the renormalize_errors function enabling rejection. A scale factor is multiplied into the true errors to allow one to reject based on the statistics of the actual error distribution.

Parameters:
  • flux_ref (numpy.ndarray) – Reference flux that we are trying to rescale our spectrum to match.

  • gpm (numpy.ndarray) – Boolean array with mask for the current iteration of the optimization. True=good

  • arg_dict (dict) – Dictionary containing arguments for the optimizing function. See poly_ratio_fitfunc_chi2 for how arguments are used. They are mask, flux_med, flux_ref_med, ivar_ref_med, wave, wave_min, wave_max, func.

  • init_from_last (obj, optional) – Use this scipy optimization object from a previous iteration as the guess.

  • kwargs_opt (dict) – Arguments to be passed to the optimizer, which in this case is just vanilla scipy.minimize with the default optimizer.

Returns:

Three objects are returned. (1) scipy optimization object, (2) scale factor to be applied to the data to match the reference spectrum flux_ref, (3) error vector to be used for the rejection that takes place at each iteration of the robust_optimize optimization

Return type:

tuple

pypeit.core.coadd.poly_ratio_fitfunc_chi2(theta, gpm, arg_dict)[source]

Function for computing the chi^2 loss function for solving for the polynomial rescaling of one spectrum to another. There are two non-standard things implemented here which increase the robustness. The first is a non-standard error used for the chi, which adds robustness and increases the stability of the optimization. This was taken from the idlutils solve_poly_ratio code. The second is that the chi is remapped using the scipy huber loss function to reduce sensitivity to outliers, based on the scipy cookbook on robust optimization.

Parameters:
  • theta (numpy.ndarray) – parameter vector for the polynomial fit

  • gpm (numpy.ndarray) – boolean mask for the current iteration of the optimization, True=good

  • arg_dict (dict) – dictionary containing arguments

Returns:

this is effectively the chi^2, i.e. the quantity to be minimized by the optimizer. Note that this is not formally the chi^2 since the huber loss function re-maps the chi to be less sensitive to outliers.

Return type:

float
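
The Huber remapping described above can be sketched with scipy.special.huber, which implements the standard Huber loss (quadratic for small residuals, linear beyond a transition scale). The delta value below is an assumed example, not the value used by PypeIt:

```python
import numpy as np
from scipy.special import huber

def huber_chi2_sketch(chi, gpm, delta=3.0):
    """Sketch: sum Huber-remapped chi values over good pixels instead of
    summing chi**2, so large outliers contribute linearly rather than
    quadratically to the loss."""
    return np.sum(huber(delta, chi[gpm]))
```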

pypeit.core.coadd.rebin2d(spec_bins, spat_bins, waveimg_stack, spatimg_stack, thismask_stack, inmask_stack, sci_list, var_list)[source]

Rebin a set of images and propagate variance onto a new spectral and spatial grid. This routine effectively “rectifies” images using np.histogram2d, which is extremely fast and effectively performs nearest grid point interpolation.

Parameters:
  • spec_bins (numpy.ndarray, float, shape = (nspec_rebin)) – Spectral bins to rebin to.

  • spat_bins (numpy.ndarray, float ndarray, shape = (nspat_rebin)) – Spatial bins to rebin to.

  • waveimg_stack (numpy.ndarray, float , shape = (nimgs, nspec, nspat)) – Stack of nimgs wavelength images with shape = (nspec, nspat) each

  • spatimg_stack (numpy.ndarray, float, shape = (nimgs, nspec, nspat)) – Stack of nimgs spatial position images with shape = (nspec, nspat) each

  • thismask_stack (numpy.ndarray, bool, shape = (nimgs, nspec, nspat)) – Stack of nimgs images with shape = (nspec, nspat) indicating the locations of the pixels on an image that are on the slit in question.

  • inmask_stack (numpy.ndarray, bool ndarray, shape = (nimgs, nspec, nspat)) – Stack of nimgs images with shape = (nspec, nspat) indicating which pixels on an image are masked. True = Good, False = Bad

  • sci_list (list) – Nested list of images, i.e. list of lists of images, where sci_list[i][j] is an image with shape (nspec, nspat), where the shape can be different for each image. The ith index is the image type, i.e. sciimg, skysub, tilts, waveimg; the jth index is the exposure or image number, i.e. nimgs. These images are to be rebinned onto the common grid.

  • var_list (list) – Nested list of variance images, i.e. list of lists of images. The format is the same as for sci_list, but note that sci_list and var_list can have different lengths. Since this routine performs a NGP rebinning, it effectively computes the average of a science image landing on a pixel. This means that the science is weighted by 1/norm_rebin_stack, and hence variances must be weighted by that factor squared, which is why they must be input here as a separate list.

Returns:

  • sci_list_out (list) – The list of ndarray rebinned images with new shape (nimgs, nspec_rebin, nspat_rebin)

  • var_list_out (list) – The list of ndarray rebinned variance images with correct error propagation with shape (nimgs, nspec_rebin, nspat_rebin).

  • norm_rebin_stack (int ndarray, shape (nimgs, nspec_rebin, nspat_rebin)) – An image stack indicating the integer occupation number of a given pixel. In other words, this number would be zero for empty bins, one for bins that were populated by a single pixel, etc. This image takes the input inmask_stack into account. The output mask for each image can be formed via outmask_rebin_stack = (norm_rebin_stack > 0).

  • nsmp_rebin_stack (int ndarray, shape (nimgs, nspec_rebin, nspat_rebin)) – An image stack indicating the integer occupation number of a given pixel taking only the thismask_stack into account, but not the inmask_stack. This image is mainly constructed for bookkeeping purposes, as it represents the number of times each pixel in the rebinned image was populated taking only the “geometry” of the rebinning into account (i.e. the thismask_stack), but not the masking (inmask_stack).
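
For a single image, the NGP rebinning described above reduces to two calls to np.histogram2d: one weighted by the science values and one unweighted for the occupation number, with the output being their ratio. A minimal sketch (hypothetical helper, not the PypeIt code):

```python
import numpy as np

def ngp_rebin(spec_bins, spat_bins, waveimg, spatimg, mask, sci):
    """Sketch: histogram the weighted science values and divide by the
    occupation number, so each output pixel is the mean of the input
    pixels landing in it."""
    w, s = waveimg[mask], spatimg[mask]
    # Occupation number per output bin (norm_rebin_stack in the docs above)
    norm, _, _ = np.histogram2d(w, s, bins=[spec_bins, spat_bins])
    # Sum of science values per output bin
    tot, _, _ = np.histogram2d(w, s, bins=[spec_bins, spat_bins], weights=sci[mask])
    out = np.zeros_like(tot)
    pop = norm > 0
    out[pop] = tot[pop] / norm[pop]  # average of contributing pixels
    return out, norm
```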

pypeit.core.coadd.renormalize_errors(chi, mask, clip=6.0, max_corr=5.0, title='', debug=False)[source]

Function for renormalizing errors. The distribution of input chi (defined by chi = (data - model)/sigma) values is analyzed, and a correction factor to the standard deviation sigma_corr is returned. This should be multiplied into the errors. In this way, a rejection threshold of, e.g., 3-sigma will always correspond to roughly the same percentile. This renormalization guarantees that rejection is not too aggressive in cases where the empirical errors determined from the chi-distribution differ significantly from the noise model which was used to determine chi.

Parameters:
  • chi (numpy.ndarray) – input chi values

  • mask (numpy.ndarray) – True = good, mask for your chi array of type bool

  • clip (float, optional) – threshold for outliers which will be clipped for the purpose of computing the renormalization factor

  • max_corr (float, optional) – maximum corrected sigma allowed.

  • title (str, optional) – title for QA plot, passed to renormalize_errors_qa

  • debug (bool, optional) – If True, show the QA plot created by renormalize_errors_qa

Returns:

(1) sigma_corr (float), corrected new sigma; (2) maskchi (numpy.ndarray, bool): new mask (True=good) which indicates the values used to compute the correction (i.e. it includes clipping)

Return type:

tuple
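
A numpy-only sketch of the renormalization idea (not the PypeIt implementation, which also produces QA and may apply additional corrections):

```python
import numpy as np

def renormalize_errors_sketch(chi, mask, clip=6.0, max_corr=5.0):
    """Sketch: the standard deviation of the clipped chi distribution would
    be 1 if the noise model were perfect, so its measured value is the
    factor to multiply into the errors (capped at max_corr)."""
    # Clip outliers before measuring the width of the chi distribution
    maskchi = mask & (np.abs(chi) < clip)
    sigma_corr = min(float(np.std(chi[maskchi])), max_corr)
    return sigma_corr, maskchi
```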

pypeit.core.coadd.renormalize_errors_qa(chi, maskchi, sigma_corr, sig_range=6.0, title: str = '', qafile: str = None)[source]

Generate a histogram QA plot of the input chi distribution.

Parameters:
  • chi (numpy.ndarray) – your chi values

  • maskchi (numpy.ndarray) – True = good, mask for your chi array of type bool

  • sigma_corr (float) – corrected sigma

  • sig_range (float) – used to set binsize, default +- 6-sigma

  • title (str, optional) – plot title

  • qafile (str, optional) – Write figure to this output QA file, if provided

pypeit.core.coadd.robust_median_ratio(flux, ivar, flux_ref, ivar_ref, mask=None, mask_ref=None, ref_percentile=70.0, min_good=0.05, maxiters=5, sigrej=3.0, max_factor=10.0, snr_do_not_rescale=1.0, verbose=False)[source]

Robustly determine the ratio between input spectrum flux and reference spectrum flux_ref. The code will perform best if the reference spectrum is chosen to be the higher S/N ratio spectrum, i.e. a preliminary stack that you want to scale each exposure to match. Note that the flux and flux_ref need to be on the same wavelength grid!!

Parameters:
  • flux (numpy.ndarray) – spectrum that will be rescaled. shape=(nspec,)

  • ivar (numpy.ndarray) – inverse variance for the spectrum that will be rescaled. Same shape as flux

  • flux_ref (numpy.ndarray) – reference spectrum. Same shape as flux

  • mask (numpy.ndarray, optional) – boolean mask for the spectrum that will be rescaled. True=Good. If not input, computed from inverse variance

  • ivar_ref (numpy.ndarray, optional) – inverse variance of reference spectrum.

  • mask_ref (numpy.ndarray, optional) – Boolean mask for reference spectrum. True=Good. If not input, computed from inverse variance.

  • ref_percentile (float, optional, default=70.0) – Percentile fraction used for selecting the minimum SNR cut from the reference spectrum. Pixels above this percentile cut are deemed the “good” pixels and are used to compute the ratio. This must be a number between 0 and 100.

  • min_good (float, optional, default = 0.05) – Minimum fraction of good pixels determined as a fraction of the total pixels for estimating the median ratio

  • maxiters (int, optional, default = 5) – Maximum number of iterations for astropy.stats.SigmaClip

  • sigrej (float, optional, default = 3.0) – Rejection threshold for astropy.stats.SigmaClip

  • max_factor (float, optional, default = 10.0,) – Maximum allowed value of the returned ratio

  • snr_do_not_rescale (float, optional, default = 1.0) – If the S/N ratio of the set of pixels (defined by upper ref_percentile in the reference spectrum) in the input spectrum have a median value below snr_do_not_rescale, median rescaling will not be attempted and the code returns ratio = 1.0. We also use this parameter to define the set of pixels (determined from the reference spectrum) to compare for the rescaling.

Returns:

ratio – the number that must be multiplied into flux in order to get it to match up with flux_ref

Return type:

float
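
A numpy-only sketch of the ratio estimate described above (the real code uses astropy.stats.SigmaClip; the helper below is hypothetical and omits the ivar-based S/N cut and the snr_do_not_rescale logic):

```python
import numpy as np

def median_ratio_sketch(flux, flux_ref, gpm, ref_percentile=70.0,
                        sigrej=3.0, maxiters=5, max_factor=10.0):
    """Sketch: keep pixels above a percentile cut in the reference
    spectrum, sigma-clip the per-pixel ratios, and return the median
    ratio clipped to [1/max_factor, max_factor]."""
    cut = np.percentile(flux_ref[gpm], ref_percentile)
    sel = gpm & (flux_ref >= cut) & (flux != 0)
    # Ratio that multiplies flux to match flux_ref
    ratios = flux_ref[sel] / flux[sel]
    for _ in range(maxiters):
        med, std = np.median(ratios), np.std(ratios)
        keep = np.abs(ratios - med) < sigrej * std
        if std == 0 or keep.all():
            break
        ratios = ratios[keep]
    return float(np.clip(np.median(ratios), 1.0 / max_factor, max_factor))
```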

pypeit.core.coadd.scale_spec(wave, flux, ivar, sn, wave_ref, flux_ref, ivar_ref, mask=None, mask_ref=None, scale_method='auto', min_good=0.05, ref_percentile=70.0, maxiters=5, sigrej=3, max_median_factor=10.0, npoly=None, hand_scale=None, sn_min_polyscale=2.0, sn_min_medscale=0.5, debug=False, show=False)[source]

Routine for solving for the best way to rescale an input spectrum flux to match a reference spectrum flux_ref. The code will work best if you choose the reference to be the highest S/N ratio spectrum. If the scale_method is not specified, the code will make a decision about which method to use based on the input S/N ratio.

Parameters:
  • wave (numpy.ndarray) – wavelengths grid for the spectra of shape (nspec,)

  • flux (numpy.ndarray) – spectrum that will be rescaled.

  • ivar (numpy.ndarray) – inverse variance for the spectrum that will be rescaled.

  • sn (float) – S/N of the spectrum that is being scaled, used to make decisions about the scaling method. This can be computed by sn_weights and passed in.

  • flux_ref (numpy.ndarray, (nspec,)) – reference spectrum.

  • ivar_ref (numpy.ndarray, (nspec,)) – inverse variance of reference spectrum.

  • mask (numpy.ndarray) – Boolean mask for the spectrum that will be rescaled. True=Good. If not input, computed from inverse variance

  • mask_ref (numpy.ndarray) – Boolean mask for reference spectrum. True=Good. If not input, computed from inverse variance.

  • min_good (float, optional, default = 0.05) – minimum fraction of the total number of good pixels needed for estimate the median ratio

  • maxiters (int, optional) – maximum number of iterations for rejecting outliers used by the robust_median_ratio routine if median rescaling is the method used.

  • max_median_factor (float, optional, default=10.0) – maximum scale factor for median rescaling for robust_median_ratio if median rescaling is the method used.

  • sigrej (float, optional, default=3.0) – rejection threshold used for rejecting outliers by robust_median_ratio

  • ref_percentile (float, optional, default=70.0) – percentile fraction cut used for selecting minimum SNR cut for robust_median_ratio

  • npoly (int, optional, default=None) – order for the poly ratio scaling if polynomial rescaling is the method used. Default is to automatically compute this based on S/N ratio of data.

  • scale_method (str, optional, default=’auto’) – Options are auto, poly, median, none, or hand. Hand is not well tested. User can optionally specify the rescaling method. The default lets the code determine this automatically, which works well.

  • hand_scale (numpy.ndarray, optional) – array of hand scale factors, not well tested. shape=(nexp,)

  • sn_min_polyscale (float, optional, default=2.0) – maximum SNR for performing median scaling (above this, polynomial scaling is used)

  • sn_min_medscale (float, optional, default=0.5) – minimum SNR for performing median scaling

  • debug (bool, optional, default=False) – show interactive QA plot

Returns:

  • flux_scale (numpy.ndarray, shape is (nspec,)) – Scaled spectrum

  • ivar_scale (numpy.ndarray, shape is (nspec,)) – Inverse variance of scaled spectrum

  • scale (numpy.ndarray, shape is (nspec,)) – Scale factor applied to the spectrum and inverse variance

  • method_used (str) – Method that was used to scale the spectra.
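
The 'auto' decision described above appears to reduce to simple S/N thresholds. A hedged sketch of that decision using the default values of sn_min_polyscale and sn_min_medscale (the helper name and exact branching are assumptions, not the PypeIt source):

```python
def choose_scale_method(sn, sn_min_polyscale=2.0, sn_min_medscale=0.5):
    """Sketch: polynomial rescaling at high S/N, median rescaling at
    intermediate S/N, no rescaling otherwise."""
    if sn > sn_min_polyscale:
        return 'poly'
    if sn > sn_min_medscale:
        return 'median'
    return 'none'
```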

pypeit.core.coadd.scale_spec_qa(wave, flux, ivar, wave_ref, flux_ref, ivar_ref, ymult, scale_method, mask=None, mask_ref=None, ylim=None, title='')[source]

QA plot for spectrum scaling.

Parameters:
  • wave (numpy.ndarray) – wavelength array for spectrum to be scaled and reference spectrum. shape=(nspec,)

  • flux (numpy.ndarray) – flux for spectrum to be scaled; shape=(nspec,)

  • ivar (numpy.ndarray) – inverse variance for spectrum to be scaled. shape=(nspec,)

  • wave_ref (numpy.ndarray) – reference wavelengths; shape=(nspec,)

  • flux_ref (numpy.ndarray) – reference flux; shape=(nspec,)

  • ivar_ref (numpy.ndarray) – inverse variance of reference flux; shape=(nspec,)

  • ymult (numpy.ndarray) – scale factor array; shape=(nspec,)

  • scale_method (str) – label of method used for rescaling which will be shown on QA plot.

  • mask (numpy.ndarray, optional) – Boolean mask for spectrum to be scaled. True=Good. If not specified, determined from inverse variance. shape=(nspec,)

  • mask_ref (numpy.ndarray, optional) – Boolean mask for reference flux. True=Good. shape=(nspec,)

  • ylim (tuple, optional) – tuple for limits of the QA plot. If None, will be determined automatically with get_ylim

  • title (str, optional) – QA plot title

pypeit.core.coadd.scale_spec_stack(wave_grid, wave_grid_mid, waves, fluxes, ivars, gpms, sns, weights, ref_percentile=70.0, maxiter_scale=5, sigrej_scale=3.0, scale_method='auto', hand_scale=None, sn_min_polyscale=2.0, sn_min_medscale=0.5, debug=False, show=False)[source]

Scales a set of spectra to a common flux scale. This is done by first computing a stack of the spectra and then scaling each spectrum to match the composite with the scale_spec() algorithm which performs increasingly sophisticated scaling depending on the S/N ratio.

Parameters:
  • wave_grid (numpy.ndarray) – New wavelength grid desired. This will typically be a regularly spaced grid created by the get_wave_grid routine. The reason for the ngrid+1 is that this is the general way to specify a set of bins if you desire ngrid bin centers, i.e. the output stacked spectra have ngrid elements. The spacing of this grid can be regular in lambda (better for multislit) or log lambda (better for echelle). This new wavelength grid should be designed with the sampling of the data in mind. For example, the code will work fine if you choose the sampling to be too fine, but then the number of exposures contributing to any given wavelength bin will be one or zero in the limiting case of very small wavelength bins. For larger wavelength bins, the number of exposures contributing to a given bin will be larger. shape=(ngrid +1,)

  • wave_grid_mid (numpy.ndarray) – Wavelength grid (in Angstrom) evaluated at the bin centers, uniformly-spaced either in lambda or log10-lambda/velocity. See core.wavecal.wvutils.py for more. shape=(ngrid,)

  • waves (list) – List of length nexp float numpy.ndarray of wavelengths of the spectra to be stacked. Note that wavelength arrays can in general be different, irregularly spaced, and have different sizes.

  • fluxes (list) – List of length nexp float numpy.ndarray of fluxes for each exposure aligned with the wavelength grids in waves.

  • ivars (list) – List of length nexp float numpy.ndarray of inverse variances for each exposure aligned with the wavelength grids in waves.

  • gpms (list) – List of length nexp bool numpy.ndarray of good pixel masks for each exposure aligned with the wavelength grids in waves. True=Good.

  • sns (numpy.ndarray) – sn of each spectrum in the list of exposures used to determine which scaling method should be used. This can be computed using sn_weights. shape=(nexp,)

  • sigrej_scale (float, optional, default=3.0) – Rejection threshold used for rejecting pixels when rescaling spectra with scale_spec.

  • ref_percentile (float, optional, default=70.0) – percentile fraction cut used for selecting minimum SNR cut for robust_median_ratio

  • maxiter_scale (int, optional, default=5) – Maximum number of iterations performed for rescaling spectra.

  • scale_method (str, optional, default='auto') – Options are auto, poly, median, none, or hand. Hand is not well tested. User can optionally specify the rescaling method. The default lets the code determine this automatically, which works well.

  • hand_scale (numpy.ndarray, optional) – array of hand scale factors, not well tested

  • sn_min_polyscale (float, optional, default=2.0) – maximum SNR for performing median scaling (above this, polynomial scaling is used)

  • sn_min_medscale (float, optional, default=0.5) – minimum SNR for performing median scaling

  • debug (bool, optional, default=False) – show interactive QA plot

Returns:

  • fluxes_scales (list) – List of length nexp numpy.ndarray rescaled fluxes

  • ivars_scales (list) – List of length nexp numpy.ndarray rescaled ivars.

  • scales (list) – List of length nexp numpy.ndarray scale factors applied to each individual spectrum and its inverse variance.

  • scale_method_used (list) – List of methods used for rescaling spectra.

pypeit.core.coadd.smooth_weights(inarr, gdmsk, sn_smooth_npix)[source]

Smooth the input weights with a Gaussian 1D kernel.

Parameters:
  • inarr (numpy.ndarray) – S/N spectrum to be smoothed. shape = (nspec,)

  • gdmsk (numpy.ndarray) – Boolean mask of good pixels. shape = (nspec,)

  • sn_smooth_npix (float) – Number of pixels used for determining smoothly varying S/N ratio weights. The sigma of the kernel is set by sig_res = max(sn_smooth_npix / 10.0, 3.0)

Returns:

smoothed version of inarr.

Return type:

numpy.ndarray
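
A sketch of this smoothing, assuming bad pixels are filled from good neighbors before convolving; scipy.ndimage.gaussian_filter1d stands in for the actual kernel code, and only the sig_res formula is taken from the docstring above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_weights_sketch(inarr, gdmsk, sn_smooth_npix):
    """Sketch: interpolate over bad pixels, then apply a Gaussian kernel
    with sig_res = max(sn_smooth_npix / 10.0, 3.0)."""
    sig_res = max(sn_smooth_npix / 10.0, 3.0)
    idx = np.arange(inarr.size)
    # Fill masked pixels by linear interpolation from good neighbors
    filled = np.interp(idx, idx[gdmsk], inarr[gdmsk])
    return gaussian_filter1d(filled, sig_res, mode='nearest')
```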

pypeit.core.coadd.sn_weights(fluxes, ivars, gpms, sn_smooth_npix=None, weight_method='auto', verbose=False)[source]

Calculate the S/N of each input spectrum and create an array of (S/N)^2 weights to be used for coadding.

Parameters:
  • fluxes (list) – List of len(nexp) containing the numpy.ndarray 1d float spectra. The shapes in the list can be different.

  • ivars (list) – List of len(nexp) containing the numpy.ndarray 1d float inverse variances of the spectra.

  • gpms (list) – List of len(nexp) containing the numpy.ndarray 1d float boolean good pixel masks of the spectra.

  • sn_smooth_npix (float, optional) – Number of pixels used for determining smoothly varying S/N ratio weights. This must be passed for all weight methods except for weight_method=’constant’ or ‘uniform’, since then wavelength-dependent weights are not used.

  • weight_method (str, optional) –

    The weighting method to be used. Options are 'auto', 'constant', 'uniform', 'wave_dependent', 'relative', or 'ivar'. The default is 'auto'. Behavior is as follows:

    • 'auto': Use constant weights if rms_sn < 3.0, otherwise use wavelength-dependent weights.

    • 'constant': Constant weights based on rms_sn**2

    • 'uniform': Uniform weighting.

    • 'wave_dependent': Wavelength dependent weights will be used irrespective of the rms_sn ratio. This option will not work well at low S/N ratio although it is useful for objects where only a small fraction of the spectral coverage has high S/N ratio (like high-z quasars).

    • 'relative': Calculate weights by fitting to the ratio of spectra. Relative weighting will only work well when there is at least one spectrum with a reasonable S/N and a continuum. (RJC note: this option may work best when the object has a strong continuum plus emission lines.) The reference spectrum is assigned a weight of 1 at all wavelengths, and the weights of all other spectra are determined relative to the reference spectrum. This is particularly useful if you are dealing with highly variable spectra (e.g. emission lines) and require a precision better than ~1 per cent.

    • 'ivar': Use inverse variance weighting. This is not well tested and should probably be deprecated.

  • verbose (bool, optional) – Verbosity of print out.

Returns:

  • rms_sn (numpy.ndarray) – Array of root-mean-square S/N values, one for each input spectrum. Shape = (nexp,)

  • weights (list) – List of length nexp containing the signal-to-noise squared weights to be applied to the spectra. This output is aligned with the input lists, i.e. it is a list of numpy.ndarray arrays with the same shapes as those in fluxes.
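
For the simplest case, the 'constant' branch can be sketched as below. This is an illustrative reimplementation (the helper name is hypothetical), not the pypeit code itself; it assumes the rms S/N is computed as in calc_snr:

```python
import numpy as np

def constant_sn_weights_sketch(fluxes, ivars, gpms):
    """Sketch of 'constant' weighting: every pixel gets rms_sn**2."""
    rms_sn = np.zeros(len(fluxes))
    weights = []
    for i, (flux, ivar, gpm) in enumerate(zip(fluxes, ivars, gpms)):
        # wavelength-dependent S/N, as in calc_snr: flux * sqrt(ivar)
        sn_val = flux * np.sqrt(np.clip(ivar, 0.0, None))
        # root-mean-square S/N over the good pixels
        rms_sn[i] = np.sqrt(np.mean(sn_val[gpm]**2))
        # 'constant' method: a flat weight array of rms_sn**2
        weights.append(np.full(flux.size, rms_sn[i]**2))
    return rms_sn, weights
```

Each returned weight array has the same shape as the corresponding input spectrum, so the output stays aligned with the input lists even when exposures have different lengths.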

pypeit.core.coadd.solve_poly_ratio(wave, flux, ivar, flux_ref, ivar_ref, norder, mask=None, mask_ref=None, scale_min=0.05, scale_max=100.0, func='legendre', model='square', maxiter=3, sticky=True, lower=3.0, upper=3.0, median_frac=0.01, ref_percentile=70.0, debug=False)[source]

Routine for solving for the polynomial rescaling of an input spectrum flux to match a reference spectrum flux_ref. The two spectra need to be defined on the same wavelength grid. The code will work best if you choose the reference to be the higher S/N ratio spectrum. Note that the code multiplies in the square of a polynomial of order norder to ensure positivity of the scale factor. It also operates on median filtered spectra to be more robust against outliers.

Parameters:
  • wave (numpy.ndarray) – wavelength array of shape (nspec,). flux, ivar, flux_ref, and ivar_ref must all be on the same wavelength grid

  • flux (numpy.ndarray) – flux that you want to rescale to match flux_ref

  • ivar (numpy.ndarray) – inverse variance of the array that you want to rescale to match flux_ref

  • flux_ref (numpy.ndarray) – reference flux that you want to rescale flux to match.

  • ivar_ref (numpy.ndarray) – inverse variance for reference flux

  • norder (int) – Order of the polynomial rescaling. norder=1 is a linear fit; norder must be >= 1, otherwise the code will fault.

  • mask (numpy.ndarray, optional) – boolean mask for spectrum that you want to rescale, True=Good

  • mask_ref (numpy.ndarray, optional) – boolean mask for reference flux

  • scale_min (float, optional) – minimum scaling factor allowed. default =0.05

  • scale_max (float, optional) – maximum scaling factor allowed. default=100.0

  • func (str, optional) – function you want to use. default=’legendre’

  • model (str, optional) – model type, valid model types are ‘poly’, ‘square’, or ‘exp’, corresponding to normal polynomial, squared polynomial, or exponentiated polynomial. default = ‘square’

  • maxiter (int, optional) – maximum number of iterations for robust_optimize. default=3

  • sticky (bool, optional) – whether you want the rejection to be sticky or not with robust_optimize. See docs for djs_reject for definition of sticky. default=True

  • lower (float, optional) – lower sigrej rejection threshold for robust_optimize. default=3.0

  • upper (float, optional) – upper sigrej rejection threshold for robust_optimize. default=3.0

  • median_frac (float, optional) – the code rescales median filtered spectra with ‘reflect’ boundary conditions. The width of the median filter will be median_frac*nspec, where nspec is the number of spectral pixels. default=0.01

  • debug (bool, optional) – If True, show interactive QA plot. default=False

Returns:

  • ymult (numpy.ndarray, (nspec,)) – rescaling factor to be multiplied into flux to match flux_ref.

  • fit_tuple (tuple) – Tuple with the polynomial coefficients, the minimum wavelength coordinate and maximum wavelength coordinate used in the fit.

  • flux_rescale (numpy.ndarray, (nspec,)) – rescaled flux, i.e. ymult multiplied into flux.

  • ivar_rescale (numpy.ndarray, (nspec,)) – rescaled inverse variance

  • outmask (numpy.ndarray, bool, (nspec,)) – output mask determined from the robust_optimize optimization/rejection iterations. True=Good
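
The 'square' model idea can be illustrated with a minimal sketch. The helper name and the direct fit to the flux ratio are assumptions for illustration; the real routine additionally applies masking, median filtering, and robust_optimize rejection:

```python
import numpy as np
from numpy.polynomial import legendre

def poly_ratio_sketch(wave, flux, flux_ref, norder):
    """Sketch: rescale flux by the square of a Legendre fit."""
    # Map wavelengths to [-1, 1], the natural Legendre domain
    x = 2.0 * (wave - wave.min()) / (wave.max() - wave.min()) - 1.0
    # Fit sqrt(flux_ref/flux) so that the applied scale factor
    # ymult = fit**2 is positive by construction ('square' model)
    ratio = np.sqrt(np.clip(flux_ref / flux, 0.0, None))
    coeffs = legendre.legfit(x, ratio, norder)
    ymult = legendre.legval(x, coeffs)**2
    return ymult, flux * ymult
```

Squaring the fitted polynomial is what guarantees scale_min/scale_max-style positivity regardless of where the polynomial itself crosses zero.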

pypeit.core.coadd.spec_reject_comb(wave_grid, wave_grid_mid, waves_list, fluxes_list, ivars_list, gpms_list, weights_list, sn_clip=30.0, lower=3.0, upper=3.0, maxrej=None, maxiter_reject=5, title='', debug=False, verbose=False)[source]

Routine for executing the iterative combine and rejection of a set of spectra to compute a final stacked spectrum.

Parameters:
  • wave_grid (numpy.ndarray) – new wavelength grid desired. This will typically be a regularly spaced grid created by the get_wave_grid routine. The reason for the ngrid+1 is that this is the general way to specify a set of bins if you desire ngrid bin centers, i.e. the output stacked spectra have ngrid elements. The spacing of this grid can be regular in lambda (better for multislit) or log lambda (better for echelle). This new wavelength grid should be designed with the sampling of the data in mind. For example, the code will work fine if you choose the sampling to be too fine, but then the number of exposures contributing to any given wavelength bin will be one or zero in the limiting case of very small wavelength bins. For larger wavelength bins, the number of exposures contributing to a given bin will be larger. shape=(ngrid+1,)

  • wave_grid_mid (numpy.ndarray) – Wavelength grid (in Angstrom) evaluated at the bin centers, uniformly-spaced either in lambda or log10-lambda/velocity. See core.wavecal.wvutils.py for more. shape=(ngrid,)

  • waves_list (list) – List of length nexp float numpy.ndarray wavelength arrays for the spectra to be stacked. Note that the wavelength grids can in general be different for each exposure, irregularly spaced, and have different sizes.

  • fluxes_list (list) – List of length nexp float numpy.ndarray fluxes for each exposure, aligned with the wavelength arrays in waves_list.

  • ivars_list (list) – List of length nexp float numpy.ndarray inverse variances for each exposure, aligned with the wavelength arrays in waves_list.

  • gpms_list (list) – List of length nexp boolean numpy.ndarray good pixel masks for each exposure, aligned with the wavelength arrays in waves_list. True=Good.

  • weights_list (list) – List of length nexp float numpy.ndarray weights for each exposure, aligned with the wavelength arrays in waves_list. These are computed using sn_weights.

  • sn_clip (float, optional, default=30.0) – Errors are capped during rejection so that the S/N is never greater than sn_clip. This prevents overly aggressive rejection in high S/N ratio spectra which nevertheless differ at a level greater than the implied S/N due to systematics.

  • lower (float, optional, default=3.0) – lower rejection threshold for djs_reject

  • upper (float, optional, default=3.0) – upper rejection threshold for djs_reject

  • maxrej (int, optional, default=None) – maximum number of pixels to reject in each iteration for djs_reject.

  • maxiter_reject (int, optional, default=5) – maximum number of iterations for stacking and rejection. The code stops iterating either when the output mask does not change between successive iterations or when maxiter_reject is reached.

  • title (str, optional) – Title for QA plot

  • debug (bool, optional, default=False) – Show QA plots useful for debugging.

  • verbose (bool, optional, default=False) – Level of verbosity.

Returns:

  • wave_stack (numpy.ndarray) – Wavelength grid for stacked spectrum. As discussed above, this is the weighted average of the wavelengths of each spectrum that contributed to a bin in the input wave_grid wavelength grid. It thus has ngrid elements, whereas wave_grid has ngrid+1 elements to specify the ngrid total number of bins. Note that wave_stack is NOT simply the wave_grid bin centers, since it computes the weighted average. shape=(ngrid,)

  • flux_stack (numpy.ndarray) – Final stacked spectrum on wave_stack wavelength grid shape=(ngrid,)

  • ivar_stack (numpy.ndarray) – Inverse variance spectrum on wave_stack wavelength grid. Errors are propagated according to weighting and masking. shape=(ngrid,)

  • gpm_stack (numpy.ndarray) – Boolean mask for stacked spectrum on wave_stack wavelength grid. True=Good. shape=(ngrid,)

  • nused (numpy.ndarray) – Number of exposures which contributed to each pixel in the wave_stack. Note that this is in general different from nexp because of masking, but also because of the sampling specified by wave_grid. In other words, sometimes more spectral pixels in the irregularly gridded input wavelength arrays will land in one bin versus another depending on the sampling. shape=(ngrid,)

  • out_gpms (list) – List of length nexp output numpy.ndarray good pixel masks for each exposure aligned with the wavelength arrays in waves indicating which pixels are rejected in each exposure of the original input spectra after performing all of the iterations of combine/rejection
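
The binning/stacking step at the heart of each iteration can be sketched as below. This is an illustrative sketch of the weighted average over wave_grid bins only (the rejection loop and calls to update_errors are omitted), and the helper name is hypothetical:

```python
import numpy as np

def bin_stack_sketch(wave_grid, waves_list, fluxes_list, ivars_list,
                     gpms_list, weights_list):
    """Sketch: weighted average of spectra in wave_grid bins."""
    ngrid = wave_grid.size - 1
    flux_num = np.zeros(ngrid)   # sum of weight * flux per bin
    wave_num = np.zeros(ngrid)   # sum of weight * wavelength per bin
    var_num = np.zeros(ngrid)    # sum of weight**2 / ivar per bin
    wsum = np.zeros(ngrid)       # sum of weights per bin
    nused = np.zeros(ngrid, dtype=int)
    for w, f, iv, g, wt in zip(waves_list, fluxes_list, ivars_list,
                               gpms_list, weights_list):
        idx = np.digitize(w, wave_grid) - 1
        ok = g & (idx >= 0) & (idx < ngrid) & (iv > 0)
        np.add.at(flux_num, idx[ok], wt[ok] * f[ok])
        np.add.at(wave_num, idx[ok], wt[ok] * w[ok])
        np.add.at(var_num, idx[ok], wt[ok]**2 / iv[ok])
        np.add.at(wsum, idx[ok], wt[ok])
        np.add.at(nused, idx[ok], 1)
    gpm_stack = wsum > 0
    safe = np.where(gpm_stack, wsum, 1.0)
    wave_stack = np.where(gpm_stack, wave_num / safe, 0.0)
    flux_stack = np.where(gpm_stack, flux_num / safe, 0.0)
    # standard error propagation for the weighted mean
    ivar_stack = np.where(var_num > 0,
                          safe**2 / np.where(var_num > 0, var_num, 1.0), 0.0)
    return wave_stack, flux_stack, ivar_stack, gpm_stack, nused
```

Note how wave_stack is the weighted average of the contributing wavelengths within each bin, not the bin centers, matching the wave_stack description above.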

pypeit.core.coadd.update_errors(fluxes, ivars, masks, fluxes_stack, ivars_stack, masks_stack, sn_clip=30.0, title='', debug=False)[source]

Determine corrections to errors using the residuals of each exposure about a preliminary stack. This routine is used as part of the iterative masking/stacking loop to determine the corrections to the errors used to reject pixels for the next iteration of the stack. The routine returns a set of corrections for each of the exposures that is input.

Parameters:
  • fluxes (numpy.ndarray) – fluxes for each exposure on the native wavelength grids shape=(nspec, nexp)

  • ivars (numpy.ndarray) – Inverse variances for each exposure on the native wavelength grids shape=(nspec, nexp)

  • masks (numpy.ndarray) – Boolean masks for each exposure on the native wavelength grids. True=Good. shape=(nspec, nexp)

  • fluxes_stack (numpy.ndarray) – Stacked spectrum for this iteration interpolated on the native wavelength grid of the fluxes exposures. shape=(nspec, nexp)

  • ivars_stack (numpy.ndarray) – Inverse variances of stacked spectrum for this iteration interpolated on the native wavelength grid of the fluxes exposures. shape=(nspec, nexp)

  • masks_stack (numpy.ndarray) – Boolean mask of stacked spectrum for this iteration interpolated on the native wavelength grid of the fluxes exposures. shape=(nspec, nexp)

  • sn_clip (float, optional) – Errors are capped in output rejivars so that the S/N is never greater than sn_clip. This prevents overly aggressive rejection in high S/N ratio spectra which nevertheless differ at a level greater than the implied S/N due to systematics.

  • title (str, optional) – Title for QA plot

  • debug (bool, optional) – If True, show QA plots useful for debugging.

Returns:

  • rejivars (numpy.ndarray) – Updated inverse variances to be used in rejection. shape=(nspec, nexp)

  • sigma_corrs (numpy.ndarray) – Array of correction factors applied to the original ivars to get the new rejivars. shape=(nexp,)

  • outchi (numpy.ndarray) – The original chi=(fluxes-fluxes_stack)*np.sqrt(ivars) used to determine the correction factors. This quantity is useful for plotting. Note that outchi is computed using the original non-corrected errors. shape=(nspec, nexp)

  • maskchi (numpy.ndarray) – Boolean mask returned by renormalize_errors indicating which pixels were used in the computation of the correction factors. This is basically the union of the input masks but with chi > clip (clip=6.0 is the default) values clipped out. shape=(nspec, nexp)

Return type:

tuple
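
The core of the error-correction logic can be sketched as follows. The exact form of the sn_clip cap and the use of a simple floored standard deviation for the correction factor are simplifying assumptions; the real routine delegates to renormalize_errors with sigma clipping:

```python
import numpy as np

def update_errors_sketch(fluxes, ivars, masks, fluxes_stack, sn_clip=30.0):
    """Sketch of per-exposure error corrections from chi residuals."""
    # Cap the inverse variance so the implied S/N never exceeds sn_clip
    # (assumed form of the cap, using the stack flux as the signal)
    ivars_cap = np.minimum(
        ivars, (sn_clip / np.maximum(np.abs(fluxes_stack), 1e-32))**2)
    # chi computed with the capped but uncorrected errors
    outchi = (fluxes - fluxes_stack) * np.sqrt(ivars_cap)
    nexp = fluxes.shape[1]
    sigma_corrs = np.ones(nexp)
    rejivars = np.zeros_like(ivars)
    for i in range(nexp):
        chi_good = outchi[masks[:, i], i]
        if chi_good.size > 0:
            # empirical width of chi; ~1 when the errors are already correct
            # (flooring at 1.0 so errors are only ever inflated is an assumption)
            sigma_corrs[i] = max(np.std(chi_good), 1.0)
        rejivars[:, i] = ivars_cap[:, i] / sigma_corrs[i]**2
    return rejivars, sigma_corrs, outchi
```

If an exposure's residuals about the stack are wider than its reported errors imply, sigma_corrs exceeds 1 and rejivars is deflated accordingly, making the next rejection iteration less aggressive for that exposure.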

pypeit.core.coadd.weights_qa(waves, weights, gpms, title='', colors=None)[source]

Routine to make a QA plot for the weights used to compute a stacked spectrum.

Parameters:
  • waves (list) – List of numpy.ndarray float 1d wavelength arrays for the spectra that went into a stack.

  • weights (list) – List of numpy.ndarray float 1d (S/N)^2 weight arrays for the exposures that went into a stack. These would have been computed by sn_weights.

  • gpms (list) – List of numpy.ndarray boolean 1d good-pixel mask arrays for the exposures that went into a stack. True=Good.

  • title (str, optional) – Title for the plot.