pypeit.utils module

General utility functions.

pypeit.utils.DFS(v: int, visited: list[bool], group: list[int], adj: ndarray)[source]

Depth-First Search of graph given by matrix adj starting from v. Updates visited and group.

Parameters:
  • v (int) – initial vertex

  • visited (List[bool]) – List keeping track of which vertices have been visited at any point in traversing the graph. visited[i] is True iff vertex i has been visited before.

  • group (List[int]) – List keeping track of which vertices have been visited in THIS CALL of DFS. After DFS returns, group contains all members of the connected component containing v. i in group is True iff vertex i has been visited in THIS CALL of DFS.

  • adj (numpy.ndarray) – Adjacency matrix description of the graph. adj[i,j] is True iff there is an edge between vertices i and j.
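
Example

A minimal usage sketch, assuming the in-place update semantics described above (the graph here is illustrative):

>>> import numpy as np
>>> # hypothetical 4-vertex graph with a single edge between vertices 0 and 1
>>> adj = np.zeros((4, 4), dtype=bool)
>>> adj[0, 1] = adj[1, 0] = True
>>> visited = [False] * 4
>>> group = []
>>> DFS(0, visited, group, adj)
>>> sorted(group)  # the connected component containing vertex 0
[0, 1]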

pypeit.utils._lhscentered(rng, n, samples)[source]
pypeit.utils._lhsclassic(rng, n, samples)[source]
pypeit.utils._lhscorrelate(rng, n, samples, iterations)[source]
pypeit.utils._lhsmaximin(rng, n, samples, iterations, lhstype)[source]
pypeit.utils._pdist(x)[source]

Calculate the pair-wise point distances of a matrix

Parameters:

x (numpy.ndarray) – An m-by-n array of scalars, where there are m points in n dimensions.

Returns:

d – A 1-by-b array of scalars, where b = m*(m - 1)/2. This array contains all the pair-wise point distances, arranged in the order (1, 0), (2, 0), …, (m-1, 0), (2, 1), …, (m-1, 1), …, (m-1, m-2).

Return type:

numpy.ndarray

Examples

>>> x = np.array([[0.1629447, 0.8616334],
...               [0.5811584, 0.3826752],
...               [0.2270954, 0.4442068],
...               [0.7670017, 0.7264718],
...               [0.8253975, 0.1937736]])
>>> _pdist(x)
array([ 0.6358488,  0.4223272,  0.6189940,  0.9406808,  0.3593699,
        0.3908118,  0.3087661,  0.6092392,  0.6486001,  0.5358894])
pypeit.utils.add_sub_dict(d, key)[source]

If a key is not present in the provided dictionary, add it as a new nested dictionary.

Parameters:
  • d (dict) – Dictionary to alter

  • key (str) – Key to add

Examples

>>> d = {}
>>> add_sub_dict(d, 'test')
>>> d
{'test': {}}
>>> d['test'] = 'this'
>>> add_sub_dict(d, 'test')
>>> d
{'test': 'this'}
>>> add_sub_dict(d, 'and')
>>> d['and'] = 'that'
>>> d
{'test': 'this', 'and': 'that'}
pypeit.utils.all_subclasses(cls)[source]

Collect all the subclasses of the provided class.

The search recursively follows the inheritance tree down from the provided base class. Intermediate base classes are included in the returned set, but the base class itself is not.

Thanks to: https://stackoverflow.com/questions/3862310/how-to-find-all-the-subclasses-of-a-class-given-its-name

Parameters:

cls (object) – The base class

Returns:

The unique set of derived classes, including any intermediate base classes in the inheritance thread.

Return type:

set

pypeit.utils.arr_setup_to_setup_list(arr_setup)[source]

This utility routine converts an arr_setup list to a setup_list. The arr_setup and setup_list formats are defined as follows, using echelle wavelength arrays (waves) as an example. See ech_combspec() for further details.

  • arr_setup is a list of length nsetups, one for each setup. Each element is a numpy array with shape = (nspec, norder, nexp), which is the data model for echelle spectra for an individual setup. The utilities arr_setup_to_setup_list() and setup_list_to_arr() convert between arr_setup and setup_list.

  • setup_list is a list of length nsetups, one for each setup. Each element is a list of norder*nexp elements, each of which contains a shape (nspec1,) array, e.g., the wavelength array for that order/exposure in setup1. The list is arranged such that the nexp1 spectra for iorder=0 appear first, followed by the nexp1 spectra for iorder=1, i.e. the exposure number is the fastest varying dimension in python array ordering. The utility functions echarr_to_echlist() and echlist_to_echarr() convert between the multi-dimensional numpy arrays in the arr_setup and the lists of numpy arrays in setup_list.

Parameters:

arr_setup (list) – A list of length nsetups of echelle output arrays, each of shape (nspec, norders, nexp).

Returns:

setup_list – List of length nsetups. Each element is a list of norder*nexp elements, each of which contains the shape (nspec1,) wavelength array for an order/exposure in that setup. The list is arranged such that the nexp1 spectra for iorder=0 appear first, followed by the nexp1 spectra for iorder=1, i.e. the exposure number is the fastest varying dimension in python array ordering.

Return type:

list

pypeit.utils.array_to_explist(array, nspec_list=None)[source]

Unfold a padded 2D array into a list of length nexp 1d arrays with sizes set by nspec_list

Parameters:
  • array (numpy.ndarray) – A 2d array of shape (nspec_max, nexp) where nspec_max is the maximum size of any of the spectra in the array.

  • nspec_list (list, optional) – List containing the size of each of the spectra embedded in the array. If None, the routine will assume that all the spectra are the same size equal to array.shape[0]

Returns:

explist – A list of nexp 1d arrays, with sizes set by nspec_list (or all equal to array.shape[0] if nspec_list is None). The data type is the same as that of the input array.

Return type:

list
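
Example

An illustrative sketch, assuming each column of the input holds one exposure, as the documented (nspec_max, nexp) shape suggests:

>>> import numpy as np
>>> arr = np.array([[1., 4.],
...                 [2., 5.],
...                 [3., 0.]])   # shape (nspec_max=3, nexp=2); the 0. is padding
>>> array_to_explist(arr, nspec_list=[3, 2])
[array([1., 2., 3.]), array([4., 5.])]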

pypeit.utils.boxcar_smooth_rows(img, nave, wgt=None, mode='nearest', replace='original')[source]

Boxcar smooth an image along its first axis (rows).

Constructs a boxcar kernel and uses scipy.ndimage.convolve to smooth the image. Smoothing does not account for any masking.

Note

For images following the PypeIt convention, this smooths the data spectrally for each spatial position.

Parameters:
  • img (numpy.ndarray) – Image to convolve.

  • nave (int) – Number of pixels along rows for smoothing.

  • wgt (numpy.ndarray, optional) – Image providing weights for each pixel in img. Uniform weights are used if none are provided.

  • mode (str, optional) – See scipy.ndimage.convolve.

Returns:

The smoothed image

Return type:

numpy.ndarray

pypeit.utils.calc_ivar(varframe)[source]

Calculate the inverse variance based on the input array

Wrapper to inverse()

Parameters:

varframe (numpy.ndarray) – Variance image

Returns:

Inverse variance image

Return type:

numpy.ndarray

pypeit.utils.clip_ivar(flux, ivar, sn_clip, gpm=None, verbose=False)[source]

Add an error floor to the inverse variance array.

This is primarily to prevent too much rejection at high-S/N (i.e. standard stars, bright objects).

Parameters:
  • flux (numpy.ndarray) – Flux array

  • ivar (numpy.ndarray) – Inverse variance array

  • sn_clip (float) – This sets the small error that is added to the input ivar such that the output inverse variance will never give S/N greater than sn_clip. This prevents overly aggressive rejection in high-S/N spectra, which nevertheless differ at a level greater than the formal S/N due to systematics. If None, the input inverse variance array is simply returned.

  • gpm (numpy.ndarray, optional) – Good-pixel mask for the input fluxes.

  • verbose (bool, optional) – Write status messages to the terminal.

Returns:

The new inverse variance array that enforces the S/N upper limit.

Return type:

numpy.ndarray
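
Example

One way to realize such a floor is to add (flux/sn_clip)**2 to the variance; this is a sketch of the idea, not necessarily the exact implementation:

>>> import numpy as np
>>> flux, ivar, sn_clip = 100.0, 1.0, 30.0
>>> var_floor = (flux / sn_clip)**2            # extra variance that caps the S/N
>>> ivar_out = 1.0 / (1.0/ivar + var_floor)
>>> bool(flux * np.sqrt(ivar_out) <= sn_clip)  # the S/N never exceeds sn_clip
True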

pypeit.utils.concat_to_setup_list(concat, norders, nexps)[source]

This routine converts from a concat list to a setup_list list. The concat list and setup_lists are defined as follows. See ech_combspec() for further details.

  • concat is a list of length \(\sum_i N_{\mathrm{order},i} N_{\mathrm{exp},i}\), where \(i\) runs over the setups. The elements of the list contain numpy arrays of, e.g., wavelengths for the setup, order, and exposure in question. The utility routines setup_list_to_concat() and concat_to_setup_list() convert between setup_lists and concat.

  • setup_list is a list of length nsetups, one for each setup. Each element is a list of norder*nexp elements, each of which contains a shape (nspec1,) array, e.g., the wavelength array for that order/exposure in setup1. The list is arranged such that the nexp1 spectra for iorder=0 appear first, followed by the nexp1 spectra for iorder=1, i.e. the exposure number is the fastest varying dimension in python array ordering. The utility functions echarr_to_echlist() and echlist_to_echarr() convert between the multi-dimensional numpy arrays in the arr_setup and the lists of numpy arrays in setup_list.

Parameters:
  • concat (list) – List of length \(\sum_i N_{\mathrm{order},i} N_{\mathrm{exp},i}\) of numpy arrays describing an echelle spectrum, where \(i\) runs over the number of setups.

  • norders (list) – List of length nsetups containing the number of orders for each setup.

  • nexps (list) – List of length nsetups containing the number of exposures for each setup.

Returns:

setup_list – List of length nsetups. Each element is a list of norder*nexp elements, each of which contains the shape (nspec1,) wavelength array for an order/exposure in that setup. The list is arranged such that the nexp1 spectra for iorder=0 appear first, followed by the nexp1 spectra for iorder=1, i.e. the exposure number is the fastest varying dimension in python array ordering.

Return type:

list
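
Example

An illustrative sketch for a single setup with two orders and two exposures (the array contents are placeholders):

>>> import numpy as np
>>> concat = [np.zeros(10) for _ in range(4)]      # norder*nexp = 2*2 arrays
>>> setup_list = concat_to_setup_list(concat, [2], [2])
>>> len(setup_list), len(setup_list[0])
(1, 4)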

pypeit.utils.contiguous_true(m)[source]

Find contiguous regions of True values in a boolean numpy array.

This is identical to what is done by numpy.ma.flatnotmasked_contiguous, except the argument is the mask, not a masked array, and it selects contiguous True regions instead of contiguous False regions.

Parameters:

m (array-like) – A boolean array. Must be 1D.

Returns:

A list of slice objects that select contiguous regions of True values in the provided array.

Return type:

list
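
Example

An illustrative doctest (the slices follow the usual numpy repr):

>>> import numpy as np
>>> m = np.array([False, True, True, False, True])
>>> contiguous_true(m)
[slice(1, 3, None), slice(4, 5, None)]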

pypeit.utils.convolve_fft(img, kernel, msk)[source]

Convolve img with an input kernel using an FFT. Following the FFT, a slower convolution is used to estimate the convolved image near the masked pixels.

Note

For images following the PypeIt convention, this smooths the data in the spectral direction for each spatial position.

Parameters:
  • img (numpy.ndarray) – Image to convolve, shape = (nspec, nspat)

  • kernel (numpy.ndarray) – 1D kernel to use when convolving the image in the spectral direction

  • msk (numpy.ndarray) – Mask of good pixels (True=good pixel). This should ideally be a slit mask, where a True value represents a pixel on the slit, and a False value is a pixel that is not on the slit. Image shape should be the same as img

Returns:

The convolved image, same shape as the input img

Return type:

numpy.ndarray

pypeit.utils.cross_correlate(x, y, maxlag)[source]

Cross correlation with a maximum number of lags. This computes the same result as:

numpy.correlate(x, y, mode='full')[len(x)-maxlag-1:len(x)+maxlag]

Edges are padded with zeros using np.pad(mode='constant').

Parameters:
  • x (numpy.ndarray) – First vector of the cross-correlation.

  • y (numpy.ndarray) – Second vector of the cross-correlation. x and y must be one-dimensional numpy arrays with the same length.

  • maxlag (int) – The maximum lag for which to compute the cross-correlation. The cross correlation is computed at integer lags from (-maxlag, maxlag)

Returns:

  • lags (numpy.ndarray, shape = (2*maxlag + 1)) – Lags for the cross-correlation. Integer spaced values from (-maxlag, maxlag).

  • xcorr (numpy.ndarray, shape = (2*maxlag + 1)) – Cross-correlation at the lags
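
Example

A sketch of the expected behavior for a one-pixel shift; the sign of the peak lag follows the numpy.correlate convention quoted above:

>>> import numpy as np
>>> x = np.array([0., 1., 0., 0.])
>>> y = np.array([0., 0., 1., 0.])   # x shifted by one pixel
>>> lags, xcorr = cross_correlate(x, y, maxlag=2)
>>> int(lags[np.argmax(xcorr)])
-1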

pypeit.utils.distinct_colors(num_colors)[source]

Return the requested number of distinct colors sampled from a matplotlib colormap. Taken from:

https://stackoverflow.com/questions/470690/how-to-automatically-generate-n-distinct-colors

Parameters:

num_colors (int) – Number of colors to return.

Returns:

An array with shape (n,3) with the RGB values for the requested number of colors.

Return type:

numpy.ndarray

pypeit.utils.echarr_to_echlist(echarr)[source]

Convert an echelle array to a list of 1d arrays.

Parameters:

echarr (numpy.ndarray) – An echelle array of shape (nspec, norder, nexp).

Returns:

  • echlist (list) – An unraveled list of 1d arrays of shape (nspec,) where the norder dimension is the fastest varying dimension and the nexp dimension is the slowest varying dimension.

  • shape (tuple) – The shape of the provided echelle array (see echarr).

pypeit.utils.echlist_to_echarr(echlist, shape)[source]

Convert a list of 1d arrays to a 3d echelle array in the format in which echelle outputs are stored, i.e. with shape (nspec, norder, nexp).

Parameters:
  • echlist (list) – An unraveled list of 1d arrays of shape (nspec,) where the norder dimension is the fastest varying dimension and the nexp dimension is the slowest varying dimension.

  • shape (tuple) – The shape of the echelle array to be returned, i.e. a tuple containing (nspec, norder, nexp)

Returns:

echarr – An echelle spectral format array of shape (nspec, norder, nexp).

Return type:

numpy.ndarray
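
Example

A sketch of the expected round trip between the two formats:

>>> import numpy as np
>>> echarr = np.arange(24.).reshape(4, 3, 2)   # (nspec, norder, nexp)
>>> echlist, shape = echarr_to_echlist(echarr)
>>> len(echlist), echlist[0].shape
(6, (4,))
>>> np.array_equal(echlist_to_echarr(echlist, shape), echarr)
True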

pypeit.utils.embed_header()[source]

Nominal header for an execution of IPython.embed.

Example

To include the returned string:

from IPython import embed
from pypeit.utils import embed_header

embed(header=embed_header())

Returns:

String with the line in the calling module, the name of the calling function, and the name of the calling file.

Return type:

str

pypeit.utils.explist_to_array(explist, pad_value=0.0)[source]

Embed a list of length nexp 1d arrays of arbitrary size in a 2d array.

Parameters:
  • explist (list) – List of length nexp containing 1d arrays of arbitrary size.

  • pad_value (scalar-like) – Value to use for padding the missing locations in the 2d array. The data type should match the data type of the 1d arrays in explist.

Returns:

array – A 2d array of shape (nspec_max, nexp), where nspec_max is the maximum size of any of the members of the input explist. The data type is the same as the data type of the original 1d arrays.

Return type:

numpy.ndarray
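
Example

An illustrative doctest, assuming each exposure is stored as a column of the output, per the documented shape:

>>> import numpy as np
>>> explist = [np.array([1., 2., 3.]), np.array([4., 5.])]
>>> explist_to_array(explist, pad_value=0.0)
array([[1., 4.],
       [2., 5.],
       [3., 0.]])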

pypeit.utils.fast_running_median(seq, window_size)[source]

Compute the median of a sequence of numbers with a running window. The boundary conditions are identical to the scipy ‘reflect’ boundary condition:

‘reflect’ (d c b a | a b c d | d c b a)

The input is extended by reflecting about the edge of the last pixel.

This code has been confirmed to produce identical results to scipy.ndimage.median_filter with the reflect boundary condition, but is ~ 100 times faster.

Code originally contributed by Peter Otten, made to be consistent with scipy.ndimage.median_filter by Joe Hennawi.

Now makes use of the Bottleneck library https://pypi.org/project/Bottleneck/.

Parameters:
  • seq (list, numpy.ndarray) – 1D array of values

  • window_size (int) – size of running window.

Returns:

median filtered values

Return type:

numpy.ndarray
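
Example

A quick check of the documented equivalence with scipy (a sketch; exact agreement is the stated behavior):

>>> import numpy as np
>>> from scipy import ndimage
>>> seq = np.arange(10.)
>>> np.array_equal(fast_running_median(seq, 3),
...                ndimage.median_filter(seq, size=3, mode='reflect'))
True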

pypeit.utils.find_nearest(array, values)[source]

For all elements of values, find the index of the nearest value in array

Parameters:
  • array (numpy.ndarray) – The array to search.

  • values (numpy.ndarray) – The values for which to find the nearest entries in array.

Returns:

idxs – Indices of array that are closest to each element of values.

Return type:

numpy.ndarray
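
Example

An illustrative doctest (an integer dtype of the result is assumed):

>>> import numpy as np
>>> find_nearest(np.array([1., 2., 4.]), np.array([1.2, 3.9]))
array([0, 2])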

pypeit.utils.find_single_file(file_pattern, required: bool = False) → pathlib.Path[source]

Find a single file matching a wildcard pattern.

Parameters:
  • file_pattern (str) – A filename pattern, see the python ‘glob’ module.

  • required (bool, optional) – If True and no files are found, an error is raised.

Returns:

A file name, or None if no file was found. If multiple files match the pattern, a warning is issued and the first one is returned.

Return type:

pathlib.Path

pypeit.utils.get_time_string(codetime)[source]

Utility function that takes the codetime and converts it to a human-readable string.

Parameters:

codetime (float) – Code execution time in seconds (usually the difference of two time.time() calls)

Returns:

A string indicating the total execution time

Return type:

str

pypeit.utils.growth_lim(a, lim, fac=1.0, midpoint=None, default=[0.0, 1.0])[source]

Calculate bounding limits for an array based on its growth.

Parameters:
  • a (array-like) – Array for which to determine limits.

  • lim (float) – Percentage of the array values to cover. Set to 1 if provided value is greater than 1.

  • fac (float, optional) – Factor to increase the range based on the growth limits. Default is no increase.

  • midpoint (float, optional) – Force the midpoint of the range to be centered on this value. Default is the sample median.

  • default (list, optional) – Default limits to return if a has no data.

Returns:

Lower and upper boundaries for the data in a.

Return type:

list

pypeit.utils.index_of_x_eq_y(x, y, strict=False)[source]

Return an index array that maps the elements of x to those of y.

This should return the index of the first element in array x equal to the associated value in array y. Inspired by: https://tinyurl.com/yyrx8acf

Parameters:
  • x (numpy.ndarray) – 1D parent array

  • y (numpy.ndarray) – 1D reference array

  • strict (bool, optional) –

    Raise an exception unless every element of y is found in x. I.e., it must be true that:

    np.array_equal(x[index_of_x_eq_y(x,y)], y)
    

Returns:

An array with the indices of x that are equal to the associated values of y. The output shape is the same as y.

Return type:

numpy.ndarray
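
Example

An illustrative doctest of the first-match behavior:

>>> import numpy as np
>>> x = np.array([4, 2, 7, 2])
>>> y = np.array([2, 7])
>>> idx = index_of_x_eq_y(x, y)
>>> np.array_equal(x[idx], y)
True
>>> int(idx[0])   # index of the first occurrence of 2 in x
1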

pypeit.utils.inverse(array)[source]

Calculate and return the inverse of the input array, enforcing positivity and setting values <= 0 to zero. The input array should be a quantity expected to always be positive, like a variance or an inverse variance. The quantity:

out = (array > 0.0)/(np.abs(array) + (array == 0.0))

is returned.

Parameters:

array (numpy.ndarray) – Array to invert

Returns:

Result of controlled 1/array calculation.

Return type:

numpy.ndarray
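
Example

An illustrative doctest of the formula above:

>>> import numpy as np
>>> inverse(np.array([4., 0., -2.]))
array([0.25, 0.  , 0.  ])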

pypeit.utils.is_float(s)[source]

Determine if a string can be converted to a floating point number.

pypeit.utils.lhs(n, samples=None, criterion=None, iterations=None, seed_or_rng=12345)[source]

Generate a Latin-hypercube design.

Parameters:
  • n (int) – The number of factors to generate samples for

  • samples (int, optional) – The number of samples to generate for each factor (Default: n).

  • criterion (str, optional) – Allowable values are “center” or “c”, “maximin” or “m”, “centermaximin” or “cm”, and “correlation” or “corr”. If no value is given, the design is simply randomized.

  • iterations (int, optional) – The number of iterations in the maximin and correlation algorithms (Default: 5).

Returns:

H – An n-by-samples design matrix that has been normalized so factor values are uniformly spaced between zero and one.

Return type:

numpy.ndarray

Example

A 3-factor design (defaults to 3 samples):

>>> lhs(3)
array([[ 0.40069325,  0.08118402,  0.69763298],
       [ 0.19524568,  0.41383587,  0.29947106],
       [ 0.85341601,  0.75460699,  0.360024  ]])

A 4-factor design with 6 samples:

>>> lhs(4, samples=6)
array([[ 0.27226812,  0.02811327,  0.62792445,  0.91988196],
       [ 0.76945538,  0.43501682,  0.01107457,  0.09583358],
       [ 0.45702981,  0.76073773,  0.90245401,  0.18773015],
       [ 0.99342115,  0.85814198,  0.16996665,  0.65069309],
       [ 0.63092013,  0.22148567,  0.33616859,  0.36332478],
       [ 0.05276917,  0.5819198 ,  0.67194243,  0.78703262]])

A 2-factor design with 5 centered samples:

>>> lhs(2, samples=5, criterion='center')
array([[ 0.3,  0.5],
       [ 0.7,  0.9],
       [ 0.1,  0.3],
       [ 0.9,  0.1],
       [ 0.5,  0.7]])

A 3-factor design with 4 samples where the minimum distance between all samples has been maximized:

>>> lhs(3, samples=4, criterion='maximin')
array([[ 0.02642564,  0.55576963,  0.50261649],
       [ 0.51606589,  0.88933259,  0.34040838],
       [ 0.98431735,  0.0380364 ,  0.01621717],
       [ 0.40414671,  0.33339132,  0.84845707]])

A 4-factor design with 5 samples where the samples are as uncorrelated as possible (within 10 iterations):

>>> lhs(4, samples=5, criterion='correlate', iterations=10)

pypeit.utils.list_of_spectral_lines()[source]

Generate a list of spectral lines

Returns:

Two numpy.ndarray objects.

Return type:

tuple

pypeit.utils.load_pickle(fname)[source]

Load a python pickle file

Parameters:

fname (str) – Filename

Returns:

The object loaded from the pickle file.

Return type:

object

pypeit.utils.nan_mad_std(data, axis=None, func=None)[source]

Wrapper for astropy.stats.mad_std which ignores nans, so as to prevent bugs when using sigma_clipped_stats with the axis keyword and stdfunc=astropy.stats.mad_std

Parameters:
  • data (array-like) – Data array or object that can be converted to an array.

  • axis (int, tuple, optional) – Axis along which the robust standard deviations are computed. The default (None) is to compute the robust standard deviation of the flattened array.

Returns:

The robust standard deviation of the input data. If axis is None then a scalar will be returned, otherwise a numpy.ndarray will be returned.

Return type:

float, numpy.ndarray

pypeit.utils.nearest_unmasked(arr, use_indices=False)[source]

Return the indices of the nearest unmasked element in a vector.

Warning

The function uses the values of the masked data for masked elements. This means that if you want to know the nearest unmasked element to one of the masked elements, the data attribute of the provided array should have meaningful values for these masked elements.

Parameters:
  • arr (numpy.ma.MaskedArray) – Array to analyze. Must be 1D.

  • use_indices (bool, optional) – The proximity of each element in the array is based on the difference in the array data values. Setting use_indices to True instead bases the calculation on the proximity of the element indices; i.e., find the index of the nearest unmasked element.

Returns:

Integer array with the indices of the nearest array elements, the definition of which depends on use_indices.

Return type:

numpy.ndarray

pypeit.utils.polyfit2d(x, y, z, order=3)[source]

Fit a 2D polynomial of the given order to the provided data.

pypeit.utils.polyfitter2d(data, mask=None, order=2)[source]

Fit a 2D polynomial to the provided data, optionally using a mask.

pypeit.utils.polyval2d(x, y, m)[source]

Evaluate a 2D polynomial with the provided coefficients at the points (x, y).

pypeit.utils.pyplot_rcparams()[source]

Set matplotlib rcParams for pretty plots.

pypeit.utils.pyplot_rcparams_default()[source]

Restore the default matplotlib rcParams.

pypeit.utils.rebinND(img, shape)[source]

Rebin a 2D image to a smaller shape. For example, if img.shape=(100,100), then shape=(10,10) would take the mean of the first 10x10 pixels into a single output pixel, and the mean of the next 10x10 pixels is output into the next pixel. Note that the elements of img.shape must be integer multiples of the elements in the new shape.

Parameters:
  • img (numpy.ndarray) – A 2D input image

  • shape (tuple) – The desired shape to be returned. The elements of img.shape should be an integer multiple of the elements of shape.

Returns:

The input image rebinned to shape

Return type:

numpy.ndarray
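
Example

An illustrative doctest, averaging 2x2 blocks of a 4x4 image:

>>> import numpy as np
>>> img = np.arange(16.).reshape(4, 4)
>>> rebinND(img, (2, 2))
array([[ 2.5,  4.5],
       [10.5, 12.5]])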

pypeit.utils.rebin_slice(a, newshape)[source]

Rebin an array to a new shape using slicing. This routine is taken from: https://scipy-cookbook.readthedocs.io/items/Rebinning.html. The image shapes need not be integer multiples of each other, but in this regime the transformation is not reversible, i.e. if a_orig = rebin_slice(rebin_slice(a, newshape), a.shape) then a_orig will not be everywhere equal to a (but it will be equal in most places). To rebin and conserve flux, use the pypeit.utils.rebinND() function (see above).

Parameters:
  • a (numpy.ndarray) – Image of any dimensionality and data type

  • newshape (tuple) – Shape of the new image desired. Dimensionality must be the same as a.

Returns:

An image with the same values as a, rebinned to the shape newshape; the dtype is the same as the input.

Return type:

numpy.ndarray

pypeit.utils.recursive_update(d, u)[source]

Update dictionary values with recursion to nested dictionaries.

Thanks to: https://stackoverflow.com/questions/3232943/update-value-of-a-nested-dictionary-of-varying-depth

Parameters:
  • d (dict) – Dictionary (potentially of other dictionaries) to be updated. This is both edited in-place and returned.

  • u (dict) – Dictionary (potentially of other dictionaries) with the updated/additional values.

Returns:

The updated dictionary.

Return type:

dict
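
Example

An illustrative doctest of the nested update:

>>> d = {'a': {'b': 1, 'c': 2}}
>>> u = {'a': {'c': 3}, 'd': 4}
>>> recursive_update(d, u)
{'a': {'b': 1, 'c': 3}, 'd': 4}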

pypeit.utils.replace_bad(frame, bpm)[source]

Find all bad pixels and replace each with the nearest good pixel.

Parameters:
  • frame (numpy.ndarray) – A frame that contains bad pixels that need to be replaced by the nearest good pixel

  • bpm (numpy.ndarray) – Boolean array (same shape as frame) indicating bad pixel values (bad=True) that need to be replaced.

Returns:

_frame – A direct copy of the input frame, with the bad pixels replaced by the nearest good pixels.

Return type:

numpy.ndarray

pypeit.utils.robust_meanstd(array)[source]

Determine a robust measure of the mean and dispersion of array

Parameters:

array (numpy.ndarray) – an array of values

Returns:

Median of the array and a robust estimate of the standard deviation (assuming a symmetric distribution).

Return type:

tuple

pypeit.utils.save_pickle(fname, obj)[source]

Save an object to a python pickle file

Parameters:
  • fname (str) – Filename

  • obj (object) – An object suitable for pickle serialization.

pypeit.utils.setup_list_to_arr_setup(setup_list, norders, nexps)[source]

This utility routine converts a setup_list list to an arr_setup list. The arr_setup and setup_list formats are defined as follows, using echelle wavelength arrays (waves) as an example. See core.coadd.coadd1d.ech_combspec for further details.

  • arr_setup is a list of length nsetups, one for each setup. Each element is a numpy array with shape = (nspec, norder, nexp), which is the data model for echelle spectra for an individual setup. The utilities arr_setup_to_setup_list() and setup_list_to_arr() convert between arr_setup and setup_list.

  • setup_list is a list of length nsetups, one for each setup. Each element is a list of norder*nexp elements, each of which contains a shape (nspec1,) array, e.g., the wavelength array for that order/exposure in setup1. The list is arranged such that the nexp1 spectra for iorder=0 appear first, followed by the nexp1 spectra for iorder=1, i.e. the exposure number is the fastest varying dimension in python array ordering. The utility functions echarr_to_echlist() and echlist_to_echarr() convert between the multi-dimensional numpy arrays in the arr_setup and the lists of numpy arrays in setup_list.

Parameters:
  • setup_list (list) – List of length nsetups. Each element of the setup list is a list of norder*nexp elements, each of which contains the shape (nspec1,) wavelength array for an order/exposure in that setup. The list is arranged such that the nexp1 spectra for iorder=0 appear first, followed by the nexp1 spectra for iorder=1, i.e. the exposure number is the fastest varying dimension in python array ordering.

  • norders (list) – List containing the number of orders for each setup.

  • nexps (list) – List containing the number of exposures for each setup

Returns:

arr_setup – List of length nsetups, each element of which is a numpy array of shape (nspec, norders, nexp); this is the echelle spectra data model.

Return type:

list

pypeit.utils.setup_list_to_concat(lst)[source]

Unravel a list of lists.

Parameters:

lst (list) – List to unravel.

Returns:

concat_list – A list of the elements of the input list, unraveled.

Return type:

list
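
Example

An illustrative doctest:

>>> setup_list_to_concat([[1, 2], [3]])
[1, 2, 3]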

pypeit.utils.smooth(x, window_len, window='flat')[source]

Smooth the data using a window of the requested size.

This method is based on the convolution of a scaled window with the signal. The signal is prepared by introducing reflected copies of the signal (with the window size) at both ends, so that edge effects are minimized at the beginning and end of the signal.

This code taken from this cookbook and slightly modified: https://scipy-cookbook.readthedocs.io/items/SignalSmooth.html

Todo

the window parameter could be the window itself if an array instead of a string

Parameters:
  • x (numpy.ndarray) – the input signal

  • window_len (int) – the dimension of the smoothing window; should be an odd integer

  • window (str, optional) – the type of window: ‘flat’, ‘hanning’, ‘hamming’, ‘bartlett’, or ‘blackman’. A flat window produces a moving-average smoothing. Default is ‘flat’.

Returns:

the smoothed signal, same shape as x

Return type:

numpy.ndarray

Examples

>>> import numpy as np
>>> t = np.linspace(-2, 2, 50)
>>> x = np.sin(t) + np.random.randn(len(t))*0.1
>>> y = smooth(x, 11)

Notes

  • See also: numpy.hanning, numpy.hamming, numpy.bartlett, numpy.blackman, numpy.convolve, scipy.signal.lfilter

  • length(output) != length(input); to correct this, return y[(window_len//2-1):-(window_len//2)] instead of just y.

pypeit.utils.spec_atleast_2d(wave, flux, ivar, gpm, log10_blaze_function=None, copy=False)[source]

Force spectral arrays to be 2D.

Input and output spectra are ordered along columns; i.e., the flux vector for the first spectrum is in flux[:,0].

Parameters:
  • wave (numpy.ndarray) – Wavelength array. Must be 1D if the other arrays are 1D. If 1D and the other arrays are 2D, the wavelength vector is assumed to be the same for all spectra.

  • flux (numpy.ndarray) – Flux array. Can be 1D or 2D.

  • ivar (numpy.ndarray) – Inverse variance array for the flux. Shape must match flux.

  • gpm (numpy.ndarray) – Good pixel mask (i.e., True=Good). Shape must match flux.

  • copy (bool, optional) – If the flux, inverse variance, and gpm arrays are already 2D on input, the function just returns the input arrays. This flag forces the returned arrays to be copies instead.

Returns:

Returns 7 objects. The first four are the reshaped wavelength, flux, inverse variance, and gpm arrays. Next is the log10_blaze_function, which is None if not provided as an input argument. The next two give the length of each spectrum and the total number of spectra; i.e., the last two elements are identical to the shape of the returned flux array.

Return type:

tuple

Raises:

PypeItError – Raised if the shape of the input objects are not appropriately matched.
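
Example

A usage sketch for 1D inputs, assuming the documented return ordering (the four reshaped arrays, the blaze, then the spectrum length and number of spectra):

>>> import numpy as np
>>> nspec = 100
>>> wave = np.linspace(4000., 5000., nspec)
>>> flux = np.ones(nspec)
>>> ivar = np.ones(nspec)
>>> gpm = np.ones(nspec, dtype=bool)
>>> out = spec_atleast_2d(wave, flux, ivar, gpm)
>>> out[1].shape        # flux promoted to 2D
(100, 1)
>>> out[-2], out[-1]    # spectrum length and number of spectra
(100, 1)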

pypeit.utils.string_table(tbl, delimeter='print', has_header=True)[source]

Provided the array of data, format it with equally spaced columns and add a header (first row) and a delimiter between the header and the table contents.

Parameters:
  • tbl (numpy.ndarray) – Array of string representations of the data to print.

  • delimeter (str, optional) – If the first row in the table contains the column headers (see has_header), this sets the delimiter between the first table row and the column data. Use 'print' for a simple line of hyphens; anything else results in rst-style table formatting.

  • has_header (bool, optional) – The first row in tbl contains the column headers.

Returns:

Single long string with the data table.

Return type:

str

pypeit.utils.subsample(frame)[source]

Used by LACosmic

Parameters:

frame (numpy.ndarray) – Array of data to subsample.

Returns:

Sliced image

Return type:

numpy.ndarray

pypeit.utils.to_string(data, use_repr=True, verbatim=False)[source]

Convert a single datum into a string

Simply return strings, recursively convert the elements of any objects with a __len__ attribute, and use the object’s own __repr__ attribute for all other objects.

Parameters:
  • data (object) – The object to stringify.

  • use_repr (bool, optional) – Use the object’s __repr__ method; otherwise, use a direct string conversion.

  • verbatim (bool, optional) – Use quotes around the provided string to indicate that the string should be represented in a verbatim (fixed width) font.

Returns:

A string representation of the provided data.

Return type:

str

pypeit.utils.yamlify(obj, debug=False)[source]

Recursively process an object so it can be serialised for yaml.

Based on jsonify in linetools.

Also found in desiutils

Note

All string-like keys in dicts are converted to str.

Parameters:
  • obj (object) – Any object.

  • debug (bool, optional) – Print extra information if requested.

Returns:

obj – An object suitable for yaml serialization. For example numpy.ndarray is converted to list, numpy.int64 is converted to int, etc.

Return type:

object
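
Example

An illustrative sketch of the conversions described above (the float conversion is assumed to behave like the documented integer one):

>>> import numpy as np
>>> yamlify({'n': np.int64(3), 'x': np.float64(1.5), 'a': np.arange(2)})
{'n': 3, 'x': 1.5, 'a': [0, 1]}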

pypeit.utils.zero_not_finite(array)[source]

Set to zero any elements of an array that are inf or NaN.

Parameters:

array (numpy.ndarray) – A numpy array of arbitrary shape that potentially has NaNs or infinities.

Returns:

new_array – A copy of the array with the nans and infinities set to zero.

Return type:

numpy.ndarray
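
Example

An illustrative doctest:

>>> import numpy as np
>>> zero_not_finite(np.array([1., np.nan, np.inf, -np.inf]))
array([1., 0., 0., 0.])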