pypeit.core.wavecal.wvutils module

Module for basic utilities used with the holy grail wavelength calibration algorithm

pypeit.core.wavecal.wvutils.arc_lines_from_spec(spec, sigdetect=10.0, fwhm=4.0, fit_frac_fwhm=1.25, cont_frac_fwhm=1.0, max_frac_fwhm=2.0, cont_samp=30, niter_cont=3, nonlinear_counts=10000000000.0, debug=False)[source]

Simple wrapper to arc.detect_lines. See that function for documentation of the parameters.

Parameters:
  • spec

  • sigdetect

  • fwhm

  • fit_frac_fwhm

  • cont_frac_fwhm

  • max_frac_fwhm

  • cont_samp

  • niter_cont

  • nonlinear_counts (float, optional) – Counts where the arc is presumed to go non-linear

  • debug

Returns:

all_tcent, all_ecent, cut_tcent, icut, arc_cont_sub. See arc.detect_lines for details

Return type:

tuple
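
A minimal usage sketch; the synthetic arc spectrum and threshold values below are illustrative placeholders, not taken from the PypeIt source:

    import numpy as np
    from pypeit.core.wavecal import wvutils

    # Build a toy arc spectrum: a few Gaussian lines on a flat background
    nspec = 2000
    x = np.arange(nspec, dtype=float)
    spec = np.full(nspec, 10.0)
    for cen, amp in [(300.0, 500.0), (950.0, 800.0), (1600.0, 300.0)]:
        spec += amp * np.exp(-0.5 * ((x - cen) / 2.0) ** 2)

    # Detect lines; see arc.detect_lines for the meaning of each output
    all_tcent, all_ecent, cut_tcent, icut, arc_cont_sub = \
        wvutils.arc_lines_from_spec(spec, sigdetect=10.0, fwhm=4.0)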

pypeit.core.wavecal.wvutils.get_delta_wave(wave, wave_gpm, frac_spec_med_filter=0.03)[source]

Compute the change in wavelength per pixel.

Given an input wavelength vector and an input good pixel mask, the raw change in wavelength is defined to be delta_wave[i] = wave[i+1]-wave[i], with delta_wave[-1] = delta_wave[-2] to force wave and delta_wave to have the same length.

The method imposes a smoothness on the change in wavelength by (1) running a median filter over the raw values and (2) smoothing delta_wave with a Gaussian kernel. The boxsize for the median filter is set by frac_spec_med_filter, and the \(\sigma\) for the Gaussian kernel is either a 10th of that boxsize or 3 pixels (whichever is larger).

Parameters:
  • wave (float numpy.ndarray, shape = (nspec,)) – Array of input wavelengths. Must be 1D.

  • wave_gpm (bool numpy.ndarray, shape = (nspec,)) – Boolean good-pixel mask defining where the wave values are good.

  • frac_spec_med_filter (float, optional) – Fraction of the length of the wavelength vector to use to median filter the raw change in wavelength, used to impose a smoothness. Default is 0.03, which means the boxsize for the running median filter will be approximately 0.03*nspec (forced to be an odd number of pixels).

Returns:

delta_wave – A smooth estimate for the change in wavelength for each pixel in the input wavelength vector.

Return type:

numpy.ndarray, float, shape = (nspec,)
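
A short usage sketch; the wavelength vector below is synthetic and only meant to show the call signature:

    import numpy as np
    from pypeit.core.wavecal import wvutils

    # Wavelength vector with a slowly varying dispersion (illustrative)
    nspec = 2048
    pix = np.arange(nspec, dtype=float)
    wave = 4000.0 + 0.5 * pix + 1.0e-5 * pix**2
    wave_gpm = np.ones(nspec, dtype=bool)     # all pixels flagged good

    # Smooth estimate of Angstroms per pixel, same length as wave
    delta_wave = wvutils.get_delta_wave(wave, wave_gpm)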

pypeit.core.wavecal.wvutils.get_sampling(waves, pix_per_R=3.0)[source]

Computes the median wavelength sampling of wavelength vector(s)

Parameters:
  • waves (list, numpy.ndarray) – List of numpy.ndarray wavelength arrays or a single 1D numpy.ndarray

  • pix_per_R (float) – Number of pixels per resolution element used for the resolution guess. The default of 3.0 assumes roughly Nyquist sampling.

Returns:

Tuple of (dlam, dloglam, resln_guess, pix_per_sigma), giving the median linear and log10 wavelength sampling of the input wavelength vector(s) and a resolution guess.

Return type:

tuple
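
An illustrative call with two synthetic wavelength arrays; the values are placeholders:

    import numpy as np
    from pypeit.core.wavecal import wvutils

    wave_blue = np.linspace(4000.0, 7000.0, 3000)
    wave_red = np.linspace(6500.0, 9500.0, 3000)

    # Median linear and log10 sampling plus a resolution guess
    dlam, dloglam, resln_guess, pix_per_sigma = \
        wvutils.get_sampling([wave_blue, wave_red])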

pypeit.core.wavecal.wvutils.get_wave_grid(waves=None, gpms=None, wave_method='linear', iref=0, wave_grid_min=None, wave_grid_max=None, dwave=None, dv=None, dloglam=None, wave_grid_input=None, spec_samp_fact=1.0)[source]

Create a new wavelength grid for spectra to be rebinned and coadded.

Parameters:
  • waves (list) – List of the N original 1D numpy.ndarray wavelength arrays. The shapes of the individual arrays are arbitrary. Required unless wave_method=’user_input’, in which case it need not be passed in.

  • gpms (list) – Good-pixel mask for wavelengths. Same format as waves and shapes of the individual arrays must match.

  • wave_method (str, optional) –

    Desired method for creating new wavelength grid:

    • ’iref’ – Use the wavelength array of the reference spectrum (selected by iref)

    • ’velocity’ – Grid is uniform in velocity

    • ’log10’ – Grid is uniform in log10(wave). This is the same as velocity.

    • ’linear’ – Constant pixel grid

    • ’concatenate’ – Meld the input wavelength arrays

    • ’user_input’ – Use a user input wavelength grid. wave_grid_input must be set for this option.

  • iref (int, optional) – Index in waves array for reference spectrum

  • wave_grid_min (float, optional) – min wavelength value for the final grid

  • wave_grid_max (float, optional) – max wavelength value for the final grid

  • dwave (float, optional) – Pixel size in same units as input wavelength array (e.g. Angstroms). Used with the ‘linear’ method. If not input, the median pixel size is calculated and used.

  • dv (float, optional) – Pixel size in km/s for ‘velocity’ method. If not input, the median km/s per pixel is calculated and used

  • dloglam (float, optional) – Pixel size in log10(wave) for the log10 or velocity method.

  • spec_samp_fact (float, optional) – Make the wavelength grid sampling finer (spec_samp_fact < 1.0) or coarser (spec_samp_fact > 1.0) by this sampling factor. This basically multiplies the ‘native’ spectral pixels by spec_samp_fact, i.e. the units of spec_samp_fact are pixels.

  • wave_grid_input (numpy.ndarray, optional) – User input wavelength grid to be used with the ‘user_input’ wave_method. Shape is (nspec_input,)

Returns:

Returns two numpy.ndarray objects and a float:

  • wave_grid: (ndarray, (ngrid +1,)) New wavelength grid, not masked. This is a set of bin edges (rightmost edge for the last bin and leftmost edges for the rest), while wave_grid_mid is a set of bin centers, hence wave_grid has 1 more value than wave_grid_mid.

  • wave_grid_mid: ndarray, (ngrid,) New wavelength grid evaluated at the centers of the wavelength bins, i.e. this grid is simply offset from wave_grid by dsamp/2.0, in either linear or log10 space depending on whether linear or (log10 or velocity) was requested. The last bin center is removed since it falls outside wave_grid. For iref or concatenate, the linear wavelength sampling will be calculated.

  • dsamp: The pixel sampling for wavelength grid created.

Return type:

tuple
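
A hedged usage sketch building a common velocity grid for two synthetic spectra; the arrays and the choice of wave_method are illustrative:

    import numpy as np
    from pypeit.core.wavecal import wvutils

    waves = [np.linspace(4000.0, 7000.0, 3000),
             np.linspace(6500.0, 9500.0, 3000)]
    gpms = [np.ones(w.size, dtype=bool) for w in waves]

    # Grid that is uniform in velocity (log10 wavelength) and spans both inputs
    wave_grid, wave_grid_mid, dsamp = wvutils.get_wave_grid(
        waves=waves, gpms=gpms, wave_method='velocity')

    # wave_grid holds the bin edges; wave_grid_mid holds the bin centers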

pypeit.core.wavecal.wvutils.get_xcorr_arc(inspec1, sigdetect=5.0, sig_ceil=10.0, percent_ceil=50.0, use_raw_arc=False, fwhm=4.0, debug=False)[source]

Utility routine to create a synthetic arc spectrum for cross-correlation using the location of the peaks in the input spectrum.

Parameters:
  • inspec1 (numpy.ndarray) – Input spectrum, shape = (nspec,)

  • sigdetect (float, optional, default=5.0) – Peak finding threshold for lines that will be used to create the synthetic xcorr_arc

  • sig_ceil (float, optional, default = 10.0) – Significance threshold for peaks that will be used to determine the line amplitude clipping threshold. For peaks with significance > sig_ceil, the code will find the amplitude corresponding to percent_ceil, and this will be the clipping threshold.

  • percent_ceil (float, optional, default=50.0) – Upper percentile threshold for thresholding positive and negative values. If set to None, no thresholding will be performed.

  • use_raw_arc (bool, optional) – If True, use amplitudes from the raw arc, i.e. do not continuum subtract. Default = False

  • fwhm (float, optional) – FWHM of arc lines. Used for peak finding and to assign a FWHM in the xcorr_arc.

  • debug (bool, optional) – Show plots for line detection debugging. Default = False

Returns:

Synthetic arc spectrum to be used for cross-correlations, shape = (nspec,)

Return type:

numpy.ndarray
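
A minimal sketch, reusing the synthetic arc spectrum spec from the arc_lines_from_spec example above; the parameter values are illustrative:

    from pypeit.core.wavecal import wvutils

    # Synthetic, cleaned-up version of spec to be used for cross-correlation
    xcorr_arc = wvutils.get_xcorr_arc(spec, sigdetect=5.0, fwhm=4.0)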

pypeit.core.wavecal.wvutils.parse_param(par, key, slit)[source]
pypeit.core.wavecal.wvutils.shift_and_stretch(spec, shift, stretch)[source]

Utility function to shift and stretch a spectrum. This operation is being implemented in many steps and could be significantly optimized. But it works for now. Note that the stretch is applied first and then the shift is applied in stretch coordinates.

Parameters:
  • spec (ndarray) – Spectrum to be shifted and stretched

  • shift (float) – shift to be applied

  • stretch (float) – stretch to be applied

Returns:

spec_out – Shifted and stretched spectrum. Regions where there is no information are set to zero.

Return type:

ndarray
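
An illustrative call, reusing spec from the sketches above; the shift and stretch values are arbitrary:

    from pypeit.core.wavecal import wvutils

    # Stretch spec by 0.1% and then shift it by 5 pixels (stretch is applied first)
    spec_out = wvutils.shift_and_stretch(spec, 5.0, 1.001)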

pypeit.core.wavecal.wvutils.wavegrid(wave_min, wave_max, dwave, spec_samp_fact=1.0, log10=False)[source]

Utility routine to generate a uniform grid of wavelengths

Parameters:
  • wave_min (float) – Minimum wavelength. Must be linear even if log10 is requested

  • wave_max (float) – Maximum wavelength. Must be linear even if log10 is requested.

  • dwave (float) – Delta wavelength interval. Must be linear if log10=False, or log10 if log10=True

  • spec_samp_fact (float, optional) – Make the wavelength grid sampling finer (spec_samp_fact < 1.0) or coarser (spec_samp_fact > 1.0) by this sampling factor. This basically multiplies the ‘native’ spectral pixels by spec_samp_fact, i.e. the units of spec_samp_fact are pixels.

  • log10 (bool, optional) – Return a geometric wavelength grid with steps of constant log base 10 in wavelength.

Returns:

  • wave_grid (numpy.ndarray, (ngrid +1,)) – New wavelength grid, not masked. This is a set of bin edges (rightmost edge for the last bin and leftmost edges for the rest), while wave_grid_mid is a set of bin centers, hence wave_grid has 1 more value than wave_grid_mid.

  • wave_grid_mid (numpy.ndarray, (ngrid,)) – New wavelength grid evaluated at the centers of the wavelength bins, i.e. this grid is simply offset from wave_grid by dsamp/2.0, in either linear or log10 space depending on whether log10 was requested. The last bin center is removed since it falls outside wave_grid.

  • dsamp (float) – The pixel sampling for wavelength grid created.
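
Two illustrative calls, one linear and one logarithmic; the wavelength limits and step sizes are placeholders:

    from pypeit.core.wavecal import wvutils

    # Linear grid: 1 Angstrom bins between 4000 and 5000 Angstroms
    wave_grid, wave_grid_mid, dsamp = wvutils.wavegrid(4000.0, 5000.0, 1.0)

    # Geometric grid: constant step of 1e-4 in log10(wave), roughly 69 km/s per pixel
    wave_grid_log, wave_grid_mid_log, dsamp_log = \
        wvutils.wavegrid(4000.0, 5000.0, 1.0e-4, log10=True)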

pypeit.core.wavecal.wvutils.write_template(nwwv, nwspec, binspec, outpath, outroot, det_cut=None, order=None, overwrite=True, cache=False)[source]

Write the template spectrum into a binary FITS table

Parameters:
  • nwwv (numpy.ndarray) – Wavelengths for the template

  • nwspec (numpy.ndarray) – Flux of the template

  • binspec (int) – Binning of the template

  • outpath (str) – Directory to store the wavelength template file

  • outroot (str) – Filename to use for the template

  • det_cut (bool, optional) – Cuts in wavelength for detector snippets. Used primarily for DEIMOS

  • order (numpy.ndarray, optional) – Echelle order numbers

  • overwrite (bool, optional) – If True, overwrite any existing file

  • cache (bool, optional) – Store the wavelength solution in the pypeit cache?
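
A hedged usage sketch; the wavelength array, flux, and output names below are placeholders:

    import numpy as np
    from pypeit.core.wavecal import wvutils

    nwwv = np.linspace(4000.0, 9000.0, 5000)   # template wavelengths
    nwspec = np.ones(5000)                     # template flux (placeholder)

    # Write the template as a binary FITS table to the given directory
    wvutils.write_template(nwwv, nwspec, 1, './', 'my_new_template.fits',
                           overwrite=True)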

pypeit.core.wavecal.wvutils.xcorr_shift(inspec1, inspec2, percent_ceil=50.0, use_raw_arc=False, sigdetect=5.0, sig_ceil=10.0, fwhm=4.0, do_xcorr_arc=True, lag_range=None, debug=False)[source]

Determine the shift inspec2 relative to inspec1. This routine computes the shift by finding the maximum of the cross-correlation coefficient. The convention for the shift is that positive shift means inspec2 is shifted to the right (higher pixel values) relative to inspec1.

Parameters:
  • inspec1 (numpy.ndarray) – Reference spectrum

  • inspec2 (numpy.ndarray) – Spectrum for which the shift is computed such that it will match inspec1

  • sigdetect (float, optional, default=5.0) – Peak finding threshold for lines that will be used to create the synthetic xcorr_arc

  • sig_ceil (float, optional, default = 10.0) – Significance threshold for peaks that will be used to determine the line amplitude clipping threshold. For peaks with significance > sig_ceil, the code will find the amplitude corresponding to percent_ceil, and this will be the clipping threshold.

  • percent_ceil (float, default=50.0) – Apply a ceiling to the input spectra at the percent_ceil percentile level of the distribution of peak amplitudes. This prevents extremely strong lines from completely dominating the cross-correlation, which can cause the cross-correlation to have spurious noise spikes that are not the real maximum.

  • use_raw_arc (bool, default = False) – If this parameter is True the raw arc will be used rather than the continuum subtracted arc

  • do_xcorr_arc (bool, default = True) – If this parameter is True, peak finding will be performed and a synthetic arc will be created to be used for the cross-correlations. If a synthetic arc has already been created by get_xcorr_arc, then set this to False

  • lag_range (tuple, default = None) – A tuple of the form (lag_min, lag_max) which sets the range of lags to search over. If None, the full range of lags will be searched.

  • debug (bool, optional, default = False) – Produce a debugging plot

Returns:

  • shift (float) – the shift which was determined

  • cross_corr (float) – the maximum of the cross-correlation coefficient at this shift
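
A minimal sketch with two synthetic arcs, where the second is a rigidly shifted copy of the first; the line positions are arbitrary:

    import numpy as np
    from pypeit.core.wavecal import wvutils

    x = np.arange(2000, dtype=float)
    arc_ref = (np.exp(-0.5 * ((x - 600.0) / 2.0) ** 2)
               + 0.7 * np.exp(-0.5 * ((x - 1400.0) / 2.0) ** 2))
    arc_new = np.roll(arc_ref, 10)    # copy shifted 10 pixels to the right

    shift, cross_corr = wvutils.xcorr_shift(arc_ref, arc_new)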

pypeit.core.wavecal.wvutils.xcorr_shift_stretch(inspec1, inspec2, cc_thresh=-1.0, percent_ceil=50.0, use_raw_arc=False, lag_range=None, shift_mnmx=(-0.2, 0.2), stretch_mnmx=(0.95, 1.05), sigdetect=5.0, sig_ceil=10.0, fwhm=4.0, debug=False, toler=1e-05, seed=None)[source]

Determine the shift and stretch of inspec2 relative to inspec1. This routine computes an initial guess for the shift by maximizing the cross-correlation. It then performs a two-parameter search for the shift and stretch by optimizing the zero-lag cross-correlation between inspec1 and the transformed inspec2 (shifted and stretched via wvutils.shift_and_stretch()) in a narrow window about the initial estimated shift. The convention for the shift is that positive shift means inspec2 is shifted to the right (higher pixel values) relative to inspec1. The convention for the stretch is that it is a float near unity that increases the size of inspec2 relative to its original size (which is the size of inspec1).

Parameters:
  • inspec1 (ndarray) – Reference spectrum

  • inspec2 (ndarray) – Spectrum for which the shift and stretch are computed such that it will match inspec1

  • cc_thresh (float, default = -1.0) – A number in the range [-1.0,1.0] which is the threshold on the initial cross-correlation coefficient for the shift/stretch. If the value of the initial cross-correlation is < cc_thresh the code will just exit and return this value and the best shift. This is desirable behavior since the shift/stretch optimization is slow and this allows one to test how correlated the spectra are before attempting it, since there is little value in that expensive computation for spectra with little overlap. The default cc_thresh = -1.0 means the shift/stretch is always attempted, since the cross-correlation coefficient cannot be less than -1.0.

  • sigdetect (float, optional, default=5.0) – Peak finding threshold for lines that will be used to create the synthetic xcorr_arc

  • sig_ceil (float, optional, default = 10.0) – Significance threshold for peaks that will be used to determine the line amplitude clipping threshold. For peaks with significance > sig_ceil, the code will find the amplitude corresponding to percent_ceil, and this will be the clipping threshold.

  • percent_ceil (float, default=50.0) – Apply a ceiling to the input spectra at the percent_ceil percentile level of the distribution of peak amplitudes. This prevents extremely strong lines from completely dominating the cross-correlation, which can cause the cross-correlation to have spurious noise spikes that are not the real maximum.

  • use_raw_arc (bool, default = False) – If this parameter is True the raw arc will be used rather than the continuum subtracted arc

  • lag_range (tuple of floats, default = None) – Range to search for the shift in the cross correlation. The code will search the window [lag_range[0],lag_range[1]]. If None, the code will search the window [shift_cc + nspec*shift_mnmx[0],shift_cc + nspec*shift_mnmx[1]] where nspec is the spectral dimension and shift_cc is the initial cross-correlation shift.

  • shift_mnmx (tuple of floats, default = (-0.2, 0.2)) – Range to search for the shift in the optimization about the initial cross-correlation based estimate of the shift. The optimization will search the window (shift_cc + nspec*shift_mnmx[0], shift_cc + nspec*shift_mnmx[1]) where nspec is the number of pixels in the spectrum

  • stretch_mnmx (tuple of floats, default = (0.95, 1.05)) – Range to search for the stretch in the optimization. The code may not work well if this range is significantly expanded because the linear approximation used to transform the arc starts to break down.

  • seed (int or np.random.RandomState, optional, default = None) – Seed for scipy.optimize.differential_evolution optimizer. If not specified, the calculation will not be repeatable

  • toler (float) – Tolerance for differential evolution optimization.

  • debug (bool, optional, default = False) – Show plots to the screen useful for debugging.

Returns:

  • success (int) –

    A flag indicating the exit status. Values are:

    • success = 1, shift and stretch performed via successful optimization

    • success = 0, shift and stretch optimization failed

    • success = -1, x-correlation is below cc_thresh

  • shift (float) – the optimal shift which was determined. If cc_thresh is set, and the initial cross-correlation is < cc_thresh, then this will be just the cross-correlation shift

  • stretch (float) – the optimal stretch which was determined. If cc_thresh is set, and the initial cross-correlation is < cc_thresh, then this will just be 1.0

  • cross_corr (float) – the value of the cross-correlation coefficient at the optimal shift and stretch. This is a number between zero and unity, with unity indicating a perfect match between the two spectra.

  • shift_init (float) – The initial shift determined by maximizing the cross-correlation coefficient without allowing for a stretch. If cc_thresh is set, and the initial cross-correlation is < cc_thresh, this will be just the shift from the initial cross-correlation

  • cross_corr_init (float) – The maximum of the initial cross-correlation coefficient determined without allowing for a stretch. If cc_thresh is set, and the initial cross-correlation is < cc_thresh, this will be just the initial cross-correlation
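
A usage sketch reusing arc_ref and arc_new from the xcorr_shift example above; the cc_thresh and seed values are illustrative:

    from pypeit.core.wavecal import wvutils

    success, shift, stretch, cross_corr, shift_init, cc_init = \
        wvutils.xcorr_shift_stretch(arc_ref, arc_new, cc_thresh=0.5, seed=42)

    if success == 1:
        # Apply the derived transformation to bring arc_new onto arc_ref
        arc_matched = wvutils.shift_and_stretch(arc_new, shift, stretch)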

pypeit.core.wavecal.wvutils.zerolag_shift_stretch(theta, y1, y2)[source]

Utility function which is run by the differential evolution optimizer in scipy. This is the function we optimize: the zero-lag cross-correlation coefficient of the two spectra after a shift and stretch are applied to the second.

Parameters:
  • theta (float numpy.ndarray) – Function parameters to optimize over. theta[0] = shift, theta[1] = stretch

  • y1 (float numpy.ndarray, shape = (nspec,)) – First spectrum which acts as the reference

  • y2 (float numpy.ndarray, shape = (nspec,)) – Second spectrum which will be transformed by a shift and stretch to match y1

Returns:

corr_norm – Negative of the zero-lag cross-correlation coefficient (since we are minimizing with scipy.optimize). scipy.optimize will thus determine the shift and stretch that maximize the cross-correlation.

Return type:

float
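
This function is normally called internally by the optimizer in xcorr_shift_stretch(), but it can also be evaluated directly. Below is a hedged sketch using arc_ref and arc_new from the earlier example with an arbitrary trial shift and stretch:

    import numpy as np
    from pypeit.core.wavecal import wvutils

    theta = np.array([5.0, 1.001])    # trial [shift, stretch]
    neg_cc = wvutils.zerolag_shift_stretch(theta, arc_ref, arc_new)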