pylablib.core.dataproc package

Submodules

pylablib.core.dataproc.callable module

class pylablib.core.dataproc.callable.ICallable[source]

Bases: object

Fit function generalization.

Has a set of mandatory arguments with no default values and a set of parameters with default values (there may or may not be an explicit list of them).

All the arguments are passed explicitly by name. Passed values supersede the default values. Extra arguments (not used in the calculations) are ignored.

Assumed (but not enforced) to be immutable: changes after creation can break the behavior.

Implements (possibly; depends on subclasses) call namelist binding boosting: if the function is to be called many times with the same parameter names list, one can first bind parameters list, and then call bound function with the corresponding arguments. This way, callable(**p) should be equivalent to callable.bind(p.keys())(*p.values()).
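
A minimal sketch of this equivalence using the FunctionCallable subclass described below (the function and values here are illustrative):

    from pylablib.core.dataproc.callable import FunctionCallable

    def model(x, a, b=1.0):
        return a*x + b

    f = FunctionCallable(model)
    p = {"x": 2.0, "a": 3.0, "b": 0.5}
    v1 = f(**p)                                # direct call with named arguments
    v2 = f.bind(list(p.keys()))(*p.values())   # bind the name list, then pass values in order
    # v1 == v2 == 6.5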

has_arg(arg_name)[source]

Determine if the function has an argument arg_name (in any of the 3 categories)

filter_args_dict(args)[source]

Filter argument names dictionary to leave only the arguments that are used

get_mandatory_args()[source]

Return list of mandatory arguments (these are the ones without default values)

is_mandatory_arg(arg_name)[source]

Check if the argument arg_name is mandatory

get_arg_default(arg_name)[source]

Return default value of the argument arg_name.

Raise KeyError if the argument is not defined or ValueError if it has no default value.

bind(arg_names, **bound_params)[source]

Bind function to a given parameters set, leaving arg_names as free parameters (in the given order)

class NamesBoundCall(func, names, bound_params)[source]

Bases: object

bind_namelist(arg_names, **bound_params)[source]

Bind namelist to boost subsequent calls.

Similar to bind(arg_names), but bound function doesn’t accept additional parameters and can be boosted.

class pylablib.core.dataproc.callable.MultiplexedCallable(func, multiplex_by, join_method='stack')[source]

Bases: pylablib.core.dataproc.callable.ICallable

Multiplex a single callable based on a single parameter.

If the function is called with this parameter as an iterable, then the underlying callable will be called for each value of the parameter separately, and the results will be joined into a single array (if the return values are scalar, they're joined into a 1D array; otherwise, they're joined using join_method).

Parameters:
  • func (callable) – Function to be parallelized.
  • multiplex_by (str) – Name of the argument to be multiplexed by.
  • join_method (str) – Method for combining individual results together if they’re non-scalars. Can be either 'list' (combine the results in a single list), 'stack' (combine using numpy.column_stack(), i.e., add dimension to the result), or 'concatenate' (concatenate the return values; the dimension of the result stays the same).

Multiplexing also makes use of call signatures for underlying function even if __call__ is used.

Note that this operation is slow, and should be used only for high-dimensional multiplexing; in the 1D case it's much better to simply use numpy arrays as arguments and rely on numpy's vectorized operations.
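
For example, a sketch of multiplexing a simple peak function over its width argument (the function and values are illustrative):

    import numpy as np
    from pylablib.core.dataproc.callable import MultiplexedCallable

    def lorentzian(x, w):
        return w/(x**2 + w**2)

    mf = MultiplexedCallable(lorentzian, "w", join_method="stack")
    x = np.linspace(-1, 1, 101)
    # 'w' is an iterable, so the underlying function is evaluated once per width value,
    # and the results are combined via numpy.column_stack
    res = mf(x=x, w=[0.1, 0.3, 1.0])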

has_arg(arg_name)[source]

Determine if the function has an argument arg_name (in any of the 3 categories)

get_mandatory_args()[source]

Return list of mandatory arguments (these are the ones without default values)

get_arg_default(arg_name)[source]

Return default value of the argument arg_name.

Raise KeyError if the argument is not defined or ValueError if it has no default value.

class NamesBoundCall(func, names, bound_params)[source]

Bases: object

bind(arg_names, **bound_params)

Bind function to a given parameters set, leaving arg_names as free parameters (in the given order)

bind_namelist(arg_names, **bound_params)

Bind namelist to boost subsequent calls.

Similar to bind(arg_names), but bound function doesn’t accept additional parameters and can be boosted.

filter_args_dict(args)

Filter argument names dictionary to leave only the arguments that are used

is_mandatory_arg(arg_name)

Check if the argument arg_name is mandatory

class pylablib.core.dataproc.callable.JoinedCallable(funcs, join_method='stack')[source]

Bases: pylablib.core.dataproc.callable.ICallable

Join several callables sharing the same arguments list.

The results will be joined into a single array (if the return values are scalar, they're joined into a 1D array; otherwise, they're joined using join_method).

Parameters:
  • funcs ([callable]) – List of functions to be joined together.
  • join_method (str) – Method for combining individual results together if they’re non-scalars. Can be either 'list' (combine the results in a single list), 'stack' (combine using numpy.column_stack(), i.e., add dimension to the result), or 'concatenate' (concatenate the return values; the dimension of the result stays the same).

has_arg(arg_name)[source]

Determine if the function has an argument arg_name (in any of the 3 categories)

get_mandatory_args()[source]

Return list of mandatory arguments (these are the ones without default values)

get_arg_default(arg_name)[source]

Return default value of the argument arg_name.

Raise KeyError if the argument is not defined or ValueError if it has no default value.

class NamesBoundCall(func, names, bound_params)[source]

Bases: object

bind(arg_names, **bound_params)

Bind function to a given parameters set, leaving arg_names as free parameters (in the given order)

bind_namelist(arg_names, **bound_params)

Bind namelist to boost subsequent calls.

Similar to bind(arg_names), but bound function doesn’t accept additional parameters and can be boosted.

filter_args_dict(args)

Filter argument names dictionary to leave only the arguments that are used

is_mandatory_arg(arg_name)

Check if the argument arg_name is mandatory

class pylablib.core.dataproc.callable.FunctionCallable(func, function_signature=None, defaults=None, alias=None)[source]

Bases: pylablib.core.dataproc.callable.ICallable

Callable based on a function or a method.

Parameters:
  • func – Function to be wrapped.
  • function_signature – A functions.FunctionSignature object supplying information about function’s argument names and default values, if they’re different from what’s extracted from its signature.
  • defaults (dict) – A dictionary {name: value} of additional default parameter values. These override the defaults from the signature. All default values must be passable to the function as parameters.
  • alias (dict) – A dictionary {alias: original} for renaming some of the original arguments. Original argument names can’t be used if aliased (though, multi-aliasing can be used explicitly, e.g., alias={'alias':'arg','arg':'arg'}). A name can be blocked (its usage causes error) if it’s aliased to None (alias={'blocked_name':None}).

Optional non-named arguments in the form *args are not supported, since all the arguments are passed to the function by keywords.

Optional named arguments in the form **kwargs are supported only if their default values are explicitly provided in defaults (otherwise it would be unclear whether argument should be added into **kwargs or ignored altogether).
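
A sketch of the alias mechanics (the function and names are illustrative): here 'width' becomes the public name of the original 'gamma' argument, and 'gamma' itself can no longer be used:

    from pylablib.core.dataproc.callable import FunctionCallable

    def lorentzian(x, gamma=1.0):
        return gamma/(x**2 + gamma**2)

    f = FunctionCallable(lorentzian, alias={"width": "gamma"})
    f(x=0.0, width=2.0)   # calls lorentzian(x=0.0, gamma=2.0) -> 0.5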

has_arg(arg_name)[source]

Determine if the function has an argument arg_name (in any of the 3 categories)

get_mandatory_args()[source]

Return list of mandatory arguments (these are the ones without default values)

get_arg_default(arg_name)[source]

Return default value of the argument arg_name.

Raise KeyError if the argument is not defined or ValueError if it has no default value.

class NamesBoundCall(func, names, bound_params)[source]

Bases: object

bind(arg_names, **bound_params)

Bind function to a given parameters set, leaving arg_names as free parameters (in the given order)

bind_namelist(arg_names, **bound_params)

Bind namelist to boost subsequent calls.

Similar to bind(arg_names), but bound function doesn’t accept additional parameters and can be boosted.

filter_args_dict(args)

Filter argument names dictionary to leave only the arguments that are used

is_mandatory_arg(arg_name)

Check if the argument arg_name is mandatory

class pylablib.core.dataproc.callable.MethodCallable(method, function_signature=None, defaults=None, alias=None)[source]

Bases: pylablib.core.dataproc.callable.FunctionCallable

Similar to FunctionCallable, but accepts class method instead of a function.

The only addition is that the object's attributes can also serve as parameters to the function: all the parameters which are not explicitly mentioned in the method signature are assumed to be the object's attributes.

The parameters are affected by alias, but NOT affected by defaults (since it’s impossible to ensure that all object’s attributes are kept constant, and it’s impractical to reset them all to default values at every function call).

Parameters:
  • method – Method to be wrapped.
  • function_signature – A functions.FunctionSignature object supplying information about the function's argument names and default values, if they're different from what's extracted from its signature. It is assumed that the first self argument is already excluded.
  • defaults (dict) – A dictionary {name: value} of additional default parameter values. These override the defaults from the signature. All default values must be passable to the function as parameters.
  • alias (dict) – A dictionary {alias: original} for renaming some of the original arguments. Original argument names can’t be used if aliased (though, multi-aliasing can be used explicitly, e.g., alias={'alias':'arg','arg':'arg'}). A name can be blocked (its usage causes error) if it’s aliased to None (alias={'blocked_name':None}).

This callable is implemented largely to be used with TheoryCalculator class (currently deprecated).

has_arg(arg_name)[source]

Determine if the function has an argument arg_name (in any of the 3 categories)

get_arg_default(arg_name)[source]

Return default value of the argument arg_name.

Raise KeyError if the argument is not defined or ValueError if it has no default value.

class NamesBoundCall(func, names, bound_params)[source]

Bases: object

bind(arg_names, **bound_params)

Bind function to a given parameters set, leaving arg_names as free parameters (in the given order)

bind_namelist(arg_names, **bound_params)

Bind namelist to boost subsequent calls.

Similar to bind(arg_names), but bound function doesn’t accept additional parameters and can be boosted.

filter_args_dict(args)

Filter argument names dictionary to leave only the arguments that are used

get_mandatory_args()

Return list of mandatory arguments (these are the ones without default values)

is_mandatory_arg(arg_name)

Check if the argument arg_name is mandatory

pylablib.core.dataproc.callable.to_callable(func)[source]

Convert a function to an ICallable instance.

If it’s already ICallable, return unchanged. Otherwise, return FunctionCallable or MethodCallable depending on whether it’s a function or a bound method.

pylablib.core.dataproc.feature module

Traces feature detection: peaks, baseline, local extrema.

class pylablib.core.dataproc.feature.Baseline[source]

Bases: pylablib.core.dataproc.feature.Baseline

Baseline (background) for a trace.

position is the background level, and width is its noise width.

count()

Return number of occurrences of value.

index()

Return first index of value.

Raises ValueError if the value is not present.

position
width

pylablib.core.dataproc.feature.get_baseline_simple(trace, find_width=True)[source]

Get the baseline of the 1D trace.

If find_width==True, calculate its width as well.

pylablib.core.dataproc.feature.subtract_baseline(trace)[source]

Subtract baseline from the trace (make its background zero).

class pylablib.core.dataproc.feature.Peak[source]

Bases: pylablib.core.dataproc.feature.Peak

A trace peak.

kernel defines its shape (e.g., for generation purposes).

count()

Return number of occurrences of value.

height
index()

Return first index of value.

Raises ValueError if the value is not present.

kernel
position
width

pylablib.core.dataproc.feature.find_peaks_cutoff(trace, cutoff, min_width=0, kind='peak', subtract_bl=True)[source]

Find peaks in the data using cutoff.

Parameters:
  • trace – 1D data array.
  • cutoff (float) – Cutoff value for the peak finding.
  • min_width (int) – Minimal uninterrupted width (in datapoints) of a peak. Any narrower peaks are ignored.
  • kind (str) – Peak kind. Can be 'peak' (positive direction), 'dip' (negative direction) or 'both' (both directions).
  • subtract_bl (bool) – If True, subtract baseline of the trace before checking cutoff.
Returns:

List of Peak objects.
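
A usage sketch (the trace and parameters are illustrative):

    import numpy as np
    from pylablib.core.dataproc import feature

    trace = np.zeros(100)
    trace[40:45] = 2.0
    peaks = feature.find_peaks_cutoff(trace, cutoff=1.0, min_width=3)
    # a list with a single Peak, with .position, .width and .height attributes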

pylablib.core.dataproc.feature.rescale_peak(peak, xoff=0.0, xscale=1.0, yoff=0, yscale=1.0)[source]

Rescale peak’s position, width and height.

xscale rescales position and width, xoff shifts position, yscale and yoff affect peak height.

pylablib.core.dataproc.feature.peaks_sum_func(peaks, peak_func='lorentzian')[source]

Create a function representing sum of peaks.

peak_func determines default peak kernel (used if peak.kernel=="generic"). Kernel is either a name string or a function taking 3 arguments (x, width, height).

pylablib.core.dataproc.feature.get_kernel(width, kernel_width=None, kernel='lorentzian')[source]

Get a finite-sized kernel.

Return 1D array of length 2*kernel_width+1 containing the given kernel. By default, kernel_width=int(width*3).

pylablib.core.dataproc.feature.get_peakdet_kernel(peak_width, background_width, kernel_width=None, kernel='lorentzian')[source]

Get a peak detection kernel.

Return 1D array of length 2*kernel_width+1 containing the kernel. The kernel is a sum of narrow positive peak (with the width peak_width) and a broad negative peak (with the width background_width); both widths are specified in datapoints (index). Each peak is normalized to have unit sum, i.e., the kernel has zero total sum. By default, kernel_width=int(background_width*3).

pylablib.core.dataproc.feature.multi_scale_peakdet(trace, widths, background_ratio, kind='peak', norm_ratio=None, kernel='lorentzian')[source]

Detect multiple peak widths using get_peakdet_kernel() kernel.

Parameters:
  • trace – 1D data array.
  • widths ([float]) – Array of possible peak widths.
  • background_ratio (float) – ratio of the background_width to the peak_width in get_peakdet_kernel().
  • kind (str) – Peak kind. Can be 'peak' (positive direction) or 'dip' (negative direction).
  • norm_ratio (float) – if not None, defines the width of the “normalization region” (in units of the kernel width, same as for the background kernel); it is then used to calculate a local trace variance to normalize the peaks magnitude.
  • kernel – Peak matching kernel.
Returns:

Filtered trace which shows peak ‘affinity’ at each point.

pylablib.core.dataproc.feature.find_local_extrema(wf, region_width=3, kind='max', min_distance=None)[source]

Find local extrema (minima or maxima) of 1D trace.

kind can be "min" or "max" and determines the kind of the extrema. Local minima (maxima) are defined as points which are smaller (greater) than all other points in the region of width region_width around them. region_width is always rounded up to an odd integer. min_distance defines the minimal distance between the extrema (region_width//2 by default). If there are several extrema within min_distance, their positions are averaged together.

pylablib.core.dataproc.feature.latching_trigger(wf, threshold_on, threshold_off, init_state='undef', result_kind='separate')[source]

Determine indices of rise and fall trigger events with hysteresis (latching) thresholds.

Return either two arrays (rise_trig, fall_trig) containing trigger indices (if result_kind=="separate"), or a single array of tuples [(dir, pos)], where dir is the trigger direction (+1 or -1) and pos is its index (if result_kind=="joined"). Triggers happen when the state switches from 'low' to 'high' (rising) or vice versa (falling). The state switches from 'low' to 'high' when the trace value goes above threshold_on, and from 'high' to 'low' when the trace value goes below threshold_off. For a stable hysteresis effect, threshold_on should be larger than threshold_off, so that trace values between the two thresholds cannot change the state. init_state specifies the initial state: "low", "high", or "undef" (undefined state).
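
A usage sketch (the trace and thresholds are illustrative; the exact indices depend on init_state):

    import numpy as np
    from pylablib.core.dataproc import feature

    wf = np.array([0.0, 0.2, 0.9, 0.8, 1.0, 0.2, 0.1, 0.9, 1.0, 0.0])
    rise, fall = feature.latching_trigger(wf, threshold_on=0.8, threshold_off=0.3, init_state="low")
    # 'rise' holds the indices where the trace first goes above 0.8 after having been below 0.3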

pylablib.core.dataproc.filters module

Routines for filtering arrays (mostly 1D data).

pylablib.core.dataproc.filters.convolve1d(trace, kernel, mode='reflect', cval=0.0)[source]

Convolution filter.

Convolves trace with the given kernel (1D array). mode and cval determine how the endpoints are handled. Simply a wrapper around the standard scipy.ndimage.convolve1d() that handles complex arguments.

pylablib.core.dataproc.filters.convolution_filter(a, width, kernel='gaussian', kernel_span='auto', mode='reflect', cval=0.0, kernel_height=None)[source]

Convolution filter.

Parameters:
  • a – array for filtering.
  • width (float) – kernel width (second parameter to the kernel function).
  • kernel – either a string defining the kernel function (see specfunc.get_kernel_func() for possible kernels), or a function taking 3 arguments (pos, width, height), where height can be None (assumes normalization by area).
  • kernel_span – the cutoff for the kernel function. Either an integer (number of points) or 'auto' (autodetect for "gaussian", "rectangle" and "exp_decay", full trace width for all other kernels).
  • mode (str) – convolution mode (see scipy.ndimage.convolve()).
  • cval (float) – convolution fill value (see scipy.ndimage.convolve()).
  • kernel_height – height parameter to be passed to the kernel function. None means normalization by area.

pylablib.core.dataproc.filters.gaussian_filter(a, width, mode='reflect', cval=0.0)[source]

Simple gaussian filter. Can handle complex data.

Equivalent to a convolution with a gaussian. Similar to scipy.ndimage.gaussian_filter1d(), but implemented using convolution_filter(), so it works with complex data.

pylablib.core.dataproc.filters.gaussian_filter_nd(a, width, mode='reflect', cval=0.0)[source]

Simple gaussian filter. Can’t handle complex data.

Equivalent to a convolution with a gaussian. Wrapper around scipy.ndimage.gaussian_filter().

pylablib.core.dataproc.filters.low_pass_filter(trace, t, mode='reflect', cval=0.0)[source]

Simple single-pole low-pass filter.

t is the filter time constant, mode and cval are the trace expansion parameters (only from the left). Implemented as a recursive digital filter, so its performance doesn’t depend strongly on t. Works only for 1D arrays.
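
A usage sketch (the signal and time constant are illustrative):

    import numpy as np
    from pylablib.core.dataproc import filters

    tax = np.linspace(0, 1, 1000)
    trace = np.sin(2*np.pi*5*tax) + 0.3*np.random.randn(len(tax))
    smoothed = filters.low_pass_filter(trace, t=50)   # time constant of 50 datapoints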

pylablib.core.dataproc.filters.high_pass_filter(trace, t, mode='reflect', cval=0.0)[source]

Simple single-pole high-pass filter (equivalent to subtracting a low-pass filter).

t is the filter time constant, mode and cval are the trace expansion parameters (only from the left). Implemented as a recursive digital filter, so its performance doesn’t depend strongly on t. Works only for 1D arrays.

pylablib.core.dataproc.filters.integrate(trace)[source]

Calculate the integral of the trace.

Alias for numpy.cumsum().

pylablib.core.dataproc.filters.differentiate(trace)[source]

Calculate the differential of the trace.

Note that since the data dimensions are changed (length is reduced by 1), the index is not preserved for pandas DataFrames.

pylablib.core.dataproc.filters.sliding_average(a, width, mode='reflect', cval=0.0)[source]

Simple sliding average filter.

Equivalent to convolution with a rectangle peak function.

pylablib.core.dataproc.filters.median_filter(a, width, mode='reflect', cval=0.0)[source]

Median filter.

Wrapper around scipy.ndimage.median_filter().

pylablib.core.dataproc.filters.sliding_filter(trace, n, dec='bin', mode='reflect', cval=0.0)[source]

Perform sliding filtering on the data.

Parameters:
  • trace – 1D array-like object.
  • n (int) – bin width.
  • dec (str) – decimation method. Can be 'bin' or 'mean' (binning average), 'sum' (sum of the points), 'min' (minimal point), 'max' (maximal point), 'median' (median point, which works as a median filter), or a function which takes a single 1D array and compresses it into a number.
  • mode (str) – Expansion mode. Can be 'constant' (added values are determined by cval), 'nearest' (added values are the end values of the trace), 'reflect' (reflect the trace with respect to its endpoint) or 'wrap' (wrap the values from the other side).
  • cval (float) – If mode=='constant', determines the expanded values.

pylablib.core.dataproc.filters.decimate(a, n, dec='skip', axis=0, mode='drop')[source]

Decimate the data.

Note that since the data dimensions are changed, the index is not preserved for pandas DataFrames.

Parameters:
  • a – data array.
  • n (int) – decimation factor.
  • dec (str) – decimation method (see the sketch after this list). Can be 'skip' (keep only every n'th point and omit the rest), 'bin' or 'mean' (binning average), 'sum' (sum of the points), 'min' (minimal point), 'max' (maximal point), 'median' (median point, which works as a median filter), or a function which takes two arguments (an nD numpy array and an axis) and compresses the array along the given axis.
  • axis (int) – axis along which to perform the decimation; can also be a tuple, in which case the same decimation is performed sequentially along several axes.
  • mode (str) – determines what to do with the last bin if it's incomplete. Can be either 'drop' (omit the last bin) or 'leave' (keep it).
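
A sketch of the different decimation methods (values are illustrative):

    import numpy as np
    from pylablib.core.dataproc import filters

    trace = np.arange(100.)
    binned = filters.decimate(trace, 10, dec="mean")   # 10 points, each a 10-point average
    skipped = filters.decimate(trace, 10, dec="skip")  # keep one point out of every 10
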
pylablib.core.dataproc.filters.binning_average(a, width, axis=0, mode='drop')[source]

Binning average filter.

Equivalent to decimate() with dec=='bin'.

pylablib.core.dataproc.filters.decimate_full(a, dec='skip', axis=0)[source]

Completely decimate the data along a given axis.

Parameters:
  • a – data array.
  • dec (str) – decimation method. Can be 'skip' (keep only every n'th point and omit the rest), 'bin' or 'mean' (binning average), 'sum' (sum of the points), 'min' (minimal point), 'max' (maximal point), 'median' (median point, which works as a median filter), or a function which takes two arguments (an nD numpy array and an axis) and compresses the array along the given axis.
  • axis (int) – axis along which to perform the decimation; can also be a tuple, in which case the same decimation is performed along several axes.

pylablib.core.dataproc.filters.decimate_datasets(arrs, dec='mean')[source]

Decimate datasets with the same shape element-wise (works only for 1D or 2D arrays).

Note that the index data is taken from the first array in the list.

dec has the same values and meaning as in decimate(). The format of the output (numpy or pandas, and the name of columns in pandas DataFrame) is determined by the first array in the list.

pylablib.core.dataproc.filters.collect_into_bins(values, distance, preserve_order=False, to_return='value')[source]

Collect all values into bins separated at least by distance.

Return the extent of each bin. If preserve_order==False, values are sorted before splitting. If to_return="value", the extent is given in values; if to_return="index", it is given in indices (only useful if preserve_order=True, as otherwise the indices correspond to a sorted array). If distance is a tuple, then it denotes the minimal and the maximal separation between consecutive elements; otherwise, it is a single number denoting maximal absolute distance (i.e., it corresponds to a tuple (-distance,distance)).

pylablib.core.dataproc.filters.split_into_bins(values, max_span, max_size=None)[source]

Split values into bins with a span of at most max_span and at most max_size elements each.

If max_size is None, it’s assumed to be infinite. Return array of indices for each bin. Values are sorted before splitting.

pylablib.core.dataproc.filters.fourier_filter(trace, response, dt=1, preserve_real=True)[source]

Apply filter to a trace in the frequency domain.

response is a (possibly complex) function taking a single 1D real numpy array of frequencies as its argument. dt specifies the time step between consecutive points. Note that in the case of multi-column data the filter is applied column-wise; this is in contrast with the Fourier transform methods, which would assume the first column to be times.

If preserve_real==True, then the response for negative frequencies is automatically taken to be complex conjugate of the response for positive frequencies (so that the real trace stays real).
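
A sketch combining fourier_filter() with the band-pass generator defined below (values are illustrative):

    import numpy as np
    from pylablib.core.dataproc import filters

    trace = np.random.randn(1024)
    resp = filters.fourier_filter_bandpass(10., 50.)         # frequencies in units set by dt
    filtered = filters.fourier_filter(trace, resp, dt=1E-3)  # keep the 10-50 Hz band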

pylablib.core.dataproc.filters.fourier_make_response_real(response)[source]

Turn a frequency filter function into a real one (in the time domain).

Done by reflecting and complex-conjugating the positive-frequency part into negative frequencies. response is a function with a single argument (frequency); the return value is a modified function.

pylablib.core.dataproc.filters.fourier_filter_bandpass(pass_range_min, pass_range_max)[source]

Generate a bandpass filter function (hard cutoff).

The function is symmetric, so that it corresponds to a real response in time domain.

pylablib.core.dataproc.filters.fourier_filter_bandstop(stop_range_min, stop_range_max)[source]

Generate a bandstop filter function (hard cutoff).

The function is symmetric, so that it corresponds to a real response in time domain.

class pylablib.core.dataproc.filters.RunningDecimationFilter(n, mode='mean', on_incomplete='none')[source]

Bases: object

Running decimation filter.

Remembers the last n samples and returns their average, median, etc.

Parameters:
  • n – decimation length
  • mode – decimation mode ("mean", "median", "min", or "max")
  • on_incomplete – determines what to return while the filter window is not yet full; can be "none" (default, return None), or "partial" (operate on the partial accumulated data)
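
A usage sketch (values are illustrative):

    from pylablib.core.dataproc.filters import RunningDecimationFilter

    flt = RunningDecimationFilter(5, mode="mean")
    for x in [1, 2, 3, 4, 5, 6]:
        flt.add(x)
    flt.get()   # mean of the last 5 samples, i.e., 4.0
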
get()[source]

Get the filtered result

add(x)[source]

Add a new sample

reset()[source]

Reset the filter

class pylablib.core.dataproc.filters.RunningDebounceFilter(n, precision=None, initial=None)[source]

Bases: object

Running debounce filter.

“Sticks” to the current value and only switches when a new value remains constant (within a given precision) for a given number of samples. Filters out temporary spikes and short changes; conceptually similar to a running median filter.

Parameters:
  • n – length of the required constant period
  • precision – comparison precision (None means that the values should be exactly equal)
  • initial – initial value; None means that the first sample sets this value

get()[source]

Get the filtered result

add(x)[source]

Add a new sample

reset()[source]

Reset the filter

pylablib.core.dataproc.fitting module

Universal function fitting interface.

class pylablib.core.dataproc.fitting.Fitter(func, xarg_name=None, fit_parameters=None, fixed_parameters=None, scale=None, limits=None, weights=None)[source]

Bases: object

Fitter object.

Can handle a variety of different functions, complex arguments or return values, and array arguments.

Parameters:
  • func (callable) – Fit function. Can be anything callable (function, method, object with __call__ method, etc.).
  • xarg_name (str or list) – Name (or multiple names) for x arguments. These arguments are passed to func (as named arguments) when calling for fitting. Can be a string (single argument) or a list (arbitrary number of arguments, including zero).
  • fit_parameters (dict) – Dictionary {name: value} of parameters to be fitted (value is the starting value for the fitting procedure). If value is None, try and get the default value from the func.
  • fixed_parameters (dict) – Dictionary {name: value} of parameters to be fixed during the fitting procedure. If value is None, try and get the default value from the func.
  • scale (dict) – Defines typical scale of fit parameters (used to normalize fit parameters supplied to scipy.optimize.least_squares()). Note: for complex parameters scale must also be a complex number, with re and im parts of the scale variable corresponding to the scale of the re and im part.
  • limits (dict) – Boundaries for the fit parameters (missing entries are assumed to be unbound). Each boundary parameter is a tuple (lower, upper). lower or upper can be None, numpy.nan or numpy.inf (with the appropriate sign), which implies no bounds in the given direction. Note: for compound data types (such as lists) the entries are still tuples of 2 elements, each of which is either None (no bound for any sub-element) or has the same structure as the full parameter. Note: for complex parameters limits must also be complex numbers (or None), with re and im parts of the limits variable corresponding to the limits of the re and im part.
  • weights (list or numpy.ndarray) – Determines the weights of y-points. Can be either an array broadcastable to y (e.g., a scalar or an array with the same shape as y), in which case it’s interpreted as list of individual point weights (which multiply residuals before they are squared). Or it can be an array with number of elements which is square of the number of elements in y, in which case it’s interpreted as a weights matrix (which matrix-multiplies residuals before they are squared).
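
For example, a minimal fitting sketch (the model and data are illustrative):

    import numpy as np
    from pylablib.core.dataproc.fitting import Fitter

    def line(x, a, b):
        return a*x + b

    xs = np.linspace(0, 1, 50)
    ys = 2.0*xs + 1.0 + 0.05*np.random.randn(len(xs))
    fitter = Fitter(line, xarg_name="x", fit_parameters={"a": 1.0, "b": 0.0})
    params, bound_func = fitter.fit(xs, ys)
    # params is a dictionary such as {"a": 2.0..., "b": 0.99...}; bound_func(xs) evaluates the fit
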
set_xarg_name(xarg_name)[source]

Set names of x arguments.

Can be a string (single argument) or a list (arbitrary number of arguments, including zero).

use_xarg()[source]

Return True if the function requires x arguments

set_fixed_parameters(fixed_parameters)[source]

Change fixed parameters

update_fixed_parameters(fixed_parameters)[source]

Update the dictionary of fixed parameters

del_fixed_parameters(fixed_parameters)[source]

Remove fixed parameters

set_fit_parameters(fit_parameters)[source]

Change fit parameters

update_fit_parameters(fit_parameters)[source]

Update the dictionary of fit parameters

del_fit_parameters(fit_parameters)[source]

Remove fit parameters

fit(x=None, y=0, fit_parameters=None, fixed_parameters=None, scale='default', limits='default', weights=1.0, parscore=None, return_stderr=False, return_residual=False, **kwargs)[source]

Fit the data.

Parameters:
  • x – x arguments. If the function has single x argument, x is an array-like object; otherwise, x is a list of array-like objects (can be None if there are no x parameters).
  • y – Target function values.
  • fit_parameters (dict) – Adds to the default fit_parameters of the fitter (has priority on duplicate entries).
  • fixed_parameters (dict) – Adds to the default fixed_parameters of the fitter (has priority on duplicate entries).
  • scale (dict) – Defines typical scale of fit parameters (used to normalize fit parameters supplied to scipy.optimize.least_squares()). Note: for complex parameters scale must also be a complex number, with re and im parts of the scale variable corresponding to the scale of the re and im part. If value is "default", use the value supplied on the fitter creation (by default, no specific scales).
  • limits (dict) – Boundaries for the fit parameters (missing entries are assumed to be unbound). Each boundary parameter is a tuple (lower, upper). lower or upper can be None, numpy.nan or numpy.inf (with the appropriate sign), which implies no bounds in the given direction. Note: for compound data types (such as lists) the entries are still tuples of 2 elements, each of which is either None (no bound for any sub-element) or has the same structure as the full parameter. Note: for complex parameters limits must also be complex numbers (or None), with re and im parts of the limits variable corresponding to the limits of the re and im part. If value is "default", use the value supplied on the fitter creation (by default, no limits).
  • weights (list or numpy.ndarray) – Determines the weights of y-points. Can be either an array broadcastable to y (e.g., a scalar or an array with the same shape as y), in which case it’s interpreted as list of individual point weights (which multiply residuals before they are squared). Or it can be an array with number of elements which is square of the number of elements in y, in which case it’s interpreted as a weights matrix (which matrix-multiplies residuals before they are squared). If value is "default", use the value supplied on the fitter creation (by default, no weights)
  • parscore (callable) – parameter score function, whose value is added to the mean-square error (sum of all squared residuals) after applying weights. Takes the same parameters as the fit function, only without the x-arguments, and returns an array-like value. Can be used for, e.g., 'soft' fit parameter constraining.
  • return_stderr (bool) – If True, append stderr to the output.
  • return_residual – If not False, append residual to the output.
  • **kwargs – arguments passed to scipy.optimize.least_squares() function.
Returns:

(params, bound_func[, stderr][, residual]):
  • params: a dictionary {name: value} of the parameters supplied to the function (both fit and fixed).
  • bound_func: the fit function with all the parameters bound (i.e., it only requires x parameters).
  • stderr: a dictionary {name: error} of standard deviations of the fit parameters.
    If the fitting routine returns no residuals (usually for a bad or an under-constrained fit), all the errors are set to NaN.
  • residual: either a full array of residuals func(x,**params)-y (if return_residual=='full'),
    a mean magnitude of the residuals mean(abs(func(x,**params)-y)**2) (if return_residual==True or return_residual=='mean'), or the total residuals including weights mean(abs((func(x,**params)-y)*weights)**2) (if return_residual=='weighted').

Return type:

tuple

initial_guess(fit_parameters=None, fixed_parameters=None, return_stderr=False, return_residual=False)[source]

Return the initial guess for the fitting.

Parameters:
  • fit_parameters (dict) – Overrides the default fit_parameters of the fitter.
  • fixed_parameters (dict) – Overrides the default fixed_parameters of the fitter.
  • return_stderr (bool) – If True, append stderr to the output.
  • return_residual – If not False, append residual to the output.
Returns:

(params, bound_func).

  • params: a dictionary {name: value} of the parameters supplied to the function (both fit and fixed).
  • bound_func: the fit function with all the parameters bound (i.e., it only requires x parameters).
  • stderr: a dictionary {name: error} of standard deviations of the fit parameters.
    Always zero; added for better compatibility with fit().
  • residual: either a full array of residuals func(x,**params)-y (if return_residual=='full') or
    a mean magnitude of the residuals mean(abs(func(x,**params)-y)**2) (if return_residual==True or return_residual=='mean'). Always zero, added for better compatibility with fit().

Return type:

tuple

pylablib.core.dataproc.fitting.huge_error(x, factor=100.0)[source]

pylablib.core.dataproc.fitting.get_best_fit(x, y, fits)[source]

Select the best (lowest residual) fit result.

x and y are the argument and the value of the bound fit function. fits is the list of fit results (tuples returned by Fitter.fit()).

pylablib.core.dataproc.fourier module

Routines for Fourier transform.

pylablib.core.dataproc.fourier.get_prev_len(l, maxprime=7)[source]

Get the largest number less than or equal to l which is composed of prime factors up to maxprime.

So far, only maxprime of 2, 3, 5, 7 and 11 are supported. maxprime of 5 gives less than 15% length reduction (and less than 6% for lengths above 400). maxprime of 11 gives less than 8% length reduction (and less than 4% for lengths above 300).

pylablib.core.dataproc.fourier.truncate_trace(trace, maxprime=7)[source]

Truncate trace length to the nearest smaller length which is composed of prime factors up to maxprime.

So far, only maxprime of 2, 3, 5, 7 and 11 are supported. maxprime of 5 gives less than 15% length reduction (and less than 6% for lengths above 400). maxprime of 11 gives less than 8% length reduction (and less than 4% for lengths above 300).

pylablib.core.dataproc.fourier.normalize_fourier_transform(ft, normalization='none', df=None, copy=False)[source]

Normalize the Fourier transform data.

ft is a 1D trace or a 2D array with 2 columns: frequency and complex amplitude. normalization can be 'none' (standard numpy normalization), 'sum' (the power sum is preserved: sum(abs(ft)**2)==sum(abs(trace)**2)), 'rms' (the power sum is equal to the trace RMS power: sum(abs(ft)**2)==mean(abs(trace)**2)), 'density' (power spectral density normalization: sum(abs(ft[:,1])**2)*df==mean(abs(trace[:,1])**2)), or 'dBc' (same as 'density', but normalized by the mean of the trace). If normalization=='density', then df can specify the frequency step between two consecutive bins; if df is None, it is extracted from the first two points of the frequency axis (or set to 1 if ft is a 1D trace).

pylablib.core.dataproc.fourier.apply_window(trace_values, window='rectangle', window_power_compensate=True)[source]

Apply FT window to the trace.

If window_power_compensate==True, the data is multiplied by a compensating factor to preserve power in the spectrum.

pylablib.core.dataproc.fourier.fourier_transform(trace, dt=None, truncate=False, normalization='none', single_sided=False, window='rectangle', window_power_compensate=True, raw=False)[source]

Calculate a fourier transform of the trace.

Parameters:
  • trace – Time trace to be transformed. It can be a 1D trace of values, a 2-column trace, or a 3-column trace. If dt is None, then the first column is assumed to be time (only a uniform time step is supported), and the other columns are either the trace values (for a single data column) or the real and imaginary parts of the trace (for two data columns). If dt is not None, then the time column is assumed to be missing, so the two columns are assumed to be the real and the imaginary parts.
  • dt – if not None, can specify the time step between the consecutive samples, in which case it is assumed that the time column is missing from the trace; otherwise, try to get it from the time column of the trace if it exists, or set to 1 otherwise.
  • truncate (bool or int) – Determines whether to truncate the trace to the nearest product of small primes (speeds up FFT algorithm); can be False (no truncation), an integer 2, 3, 5, 7, or 11 (truncate to a product of primes up to and including this number), or True (default prime factorization, currently set to 7)
  • normalization (str) – Fourier transform normalization: 'none' (no normalization, i.e., the numpy default), 'sum' (the norm of the data is conserved: sum(abs(ft[:,1])**2)==sum(abs(trace[:,1])**2)), 'rms' (the sum of the PSD equals the squared RMS trace amplitude: sum(abs(ft[:,1])**2)==mean(abs(trace[:,1])**2)), 'density' (power spectral density normalization, in x/rtHz: sum(abs(ft[:,1])**2)*df==mean(abs(trace[:,1])**2)), or 'dBc' (like 'density', but normalized to the mean trace value).
  • single_sided (bool) – If True, only leave positive frequency side of the transform.
  • window (str) – FT window. Can be 'rectangle' (essentially, no window), 'hann' or 'hamming'.
  • window_power_compensate (bool) – If True, the data is multiplied by a compensating factor to preserve power in the spectrum.
  • raw (bool) – if True, return a simple 1D trace with the result.
Returns:

a two-column array of the same kind as the input, where the first column is frequency, and the second is complex FT data.
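
For example, a sketch of transforming a sampled sine wave (values are illustrative):

    import numpy as np
    from pylablib.core.dataproc import fourier

    dt = 1E-3
    tax = np.arange(0, 1, dt)
    trace = np.sin(2*np.pi*50*tax)
    ft = fourier.fourier_transform(trace, dt=dt, single_sided=True)
    # ft[:,0] is the frequency axis (up to the 500 Hz Nyquist frequency), ft[:,1] the complex amplitude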

pylablib.core.dataproc.fourier.flip_fourier_transform(ft)[source]

Flip the fourier transform (analogous to making frequencies negative and flipping the order).

pylablib.core.dataproc.fourier.inverse_fourier_transform(ft, df=None, truncate=False, zero_loc=None, symmetric_time=False, raw=False)[source]

Calculate an inverse fourier transform of the trace.

Parameters:
  • ft – Fourier transform data to be inverted. It can be a 1D trace of values, a 2-column trace, or a 3-column trace. If df is None, then the first column is assumed to be frequency (only a uniform frequency step is supported), and the other columns are either the trace values (for a single data column) or the real and imaginary parts of the trace (for two data columns). If df is not None, then the frequency column is assumed to be missing, so the two columns are assumed to be the real and the imaginary parts.
  • df – if not None, can specify the frequency step between the consecutive samples; otherwise, try to get it from the frequency column of the trace if it exists, or set to 1 otherwise.
  • truncate (bool or int) – Determines whether to truncate the trace to the nearest product of small primes (speeds up FFT algorithm); can be False (no truncation), an integer 2, 3, 5, 7, or 11 (truncate to a product of primes up to and including this number), or True (default prime factorization, currently set to 7)
  • zero_loc – Location of the zero frequency point. Can be None (the point with the f-axis value closest to zero, or the first point if the frequency column is missing), 'center' (the mid-point), or an integer index.
  • symmetric_time (bool) – If True, make time axis go from (-0.5/df, 0.5/df) rather than (0, 1./df).
  • raw (bool) – if True, return a simple 1D trace with the result.
Returns:

a two-column array, where the first column is time, and the second is the complex-valued trace data.

pylablib.core.dataproc.fourier.power_spectral_density(trace, dt=None, truncate=False, normalization='density', single_sided=False, window='rectangle', window_power_compensate=True, raw=False)[source]

Calculate a power spectral density of the trace.

Parameters:
  • trace – Time trace to be transformed. It can be a 1D trace of values, a 2-column trace, or a 3-column trace. If dt is None, then the first column is assumed to be time (only a uniform time step is supported), and the other columns are either the trace values (for a single data column) or the real and imaginary parts of the trace (for two data columns). If dt is not None, then the time column is assumed to be missing, so the two columns are assumed to be the real and the imaginary parts.
  • dt – if not None, can specify the time step between the consecutive samples; otherwise, try to get it from the time column of the trace if it exists, or set to 1 otherwise.
  • truncate (bool or int) – Determines whether to truncate the trace to the nearest product of small primes (speeds up FFT algorithm); can be False (no truncation), an integer 2, 3, 5, 7, or 11 (truncate to a product of primes up to and including this number), or True (default prime factorization, currently set to 7)
  • normalization (str) – Fourier transform normalization: 'none' (no normalization, i.e., the numpy default), 'sum' (the norm of the data is conserved: sum(PSD[:,1])==sum(abs(trace[:,1])**2)), 'rms' (the sum of the PSD equals the squared RMS trace amplitude: sum(PSD[:,1])==mean(abs(trace[:,1])**2)), 'density' (power spectral density normalization, in x/rtHz: sum(PSD[:,1])*df==mean(abs(trace[:,1])**2)), or 'dBc' (like 'density', but normalized to the mean trace value).
  • single_sided (bool) – If True, only leave positive frequency side of the PSD.
  • window (str) – FT window. Can be 'rectangle' (essentially, no window), 'hann' or 'hamming'.
  • window_power_compensate (bool) – If True, the data is multiplied by a compensating factor to preserve power in the spectrum.
  • raw (bool) – if True, return a simple 1D trace with the result.
Returns:

a two-column array, where the first column is frequency, and the second is positive PSD.

pylablib.core.dataproc.fourier.get_real_part_ft(ft)[source]

Get the fourier transform of the real part only from the fourier transform of a complex variable.

pylablib.core.dataproc.fourier.get_imag_part_ft(ft)[source]

Get the fourier transform of the imaginary part only from the fourier transform of a complex variable.

pylablib.core.dataproc.fourier.get_correlations_ft(ft_a, ft_b, zero_mean=True, normalization='none')[source]

Calculate the correlation function of the two variables given their fourier transforms.

Parameters:
  • ft_a – first variable fourier transform
  • ft_b – second variable fourier transform
  • zero_mean (bool) – If True, the value corresponding to the zero frequency is set to zero (only fluctuations around means of a and b are calculated).
  • normalization (str) – Can be 'whole' (correlations are normalized by product of PSDs derived from ft_a and ft_b) or 'individual' (normalization is done for each frequency individually, so that the absolute value is always 1).

pylablib.core.dataproc.iir_transform module

Digital recursive infinite impulse response filter.

Implemented using Numba library (JIT high-performance compilation) if possible.

pylablib.core.dataproc.iir_transform.iir_apply_complex(trace, xcoeff, ycoeff)[source]

Apply a digital (possibly recursive) filter with coefficients xcoeff and ycoeff along the first axis.

The result is the filtered signal y with y[n] = sum_j x[n-j]*xcoeff[j] + sum_k y[n-k-1]*ycoeff[k].
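
For example, a single-pole low-pass filter y[n] = a*x[n] + (1-a)*y[n-1] corresponds to xcoeff=[a] and ycoeff=[1-a] (a sketch; the coefficient value is illustrative):

    import numpy as np
    from pylablib.core.dataproc.iir_transform import iir_apply_complex

    a = 0.1
    x = np.random.randn(1000).astype(complex)   # the routine operates on complex traces
    y = iir_apply_complex(x, np.array([a], dtype=complex), np.array([1-a], dtype=complex))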

pylablib.core.dataproc.image module

pylablib.core.dataproc.image.convert_shape_indexing(shape, src, dst, axes=(0, 1))[source]

Convert image indexing style.

shape is the source image shape (2-tuple), src and dst are the current format and the desired format. Formats can be "rcb" (first index is row, second is column, rows count from the bottom), "rct" (same, but rows count from the top), "xyb" (first index is column, second is row, rows count from the bottom), or "xyt" (same, but rows count from the top). "rc" is interpreted as "rct", and "xy" as "xyt".

pylablib.core.dataproc.image.convert_image_indexing(img, src, dst, axes=(0, 1))[source]

Convert image indexing style.

img is the source image (an ND numpy array with N>=2), src and dst are the current format and the desired format, and axes specify correspondingly the row and the column axes (by default, the first two array axes). Formats can be "rcb" (first index is row, second is column, rows count from the bottom), "rct" (same, but rows count from the top), "xyb" (first index is column, second is row, rows count from the bottom), or "xyt" (same, but rows count from the top). "rc" is interpreted as "rct", and "xy" as "xyt".
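
For example, a sketch of converting between row/column and xy indexing (values are illustrative):

    import numpy as np
    from pylablib.core.dataproc import image

    img_rct = np.arange(12).reshape(3, 4)   # img_rct[row, col], with row 0 at the top
    img_xyt = image.convert_image_indexing(img_rct, "rct", "xyt")   # img_xyt[col, row]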

class pylablib.core.dataproc.image.ROI(imin=0, imax=None, jmin=0, jmax=None)[source]

Bases: object

copy()[source]
center(shape=None)[source]
size(shape=None)[source]
area(shape=None)[source]
tup(shape=None)[source]
ispan(shape=None)[source]
jspan(shape=None)[source]
classmethod from_centersize(center, size, shape=None)[source]
classmethod intersect(*args)[source]
limit(shape)[source]

pylablib.core.dataproc.image.get_region(image, center, size, axis=(-2, -1))[source]

Get part of the image with the given center and size (both are tuples (i, j)).

The region is automatically reduced if a part of it is outside of the image.

pylablib.core.dataproc.image.get_region_sum(image, center, size, axis=(-2, -1))[source]

Sum part of the image with the given center and size (both are tuples (i, j)).

The region is automatically reduced if a part of it is outside of the image. Return tuple (sum, area), where area is the actual summed region area (in pixels).

pylablib.core.dataproc.interpolate module

pylablib.core.dataproc.interpolate.interpolate1D_func(x, y, kind='linear', axis=-1, copy=True, bounds_error=True, fill_values=nan, assume_sorted=False)[source]

1D interpolation.

Simply a wrapper around scipy.interpolate.interp1d.

Parameters:
  • x – 1D array of x coordinates of the data points.
  • y – array of values corresponding to x points (can have more than 1 dimension, in which case the output values are (N-1)-dimensional)
  • kind – Interpolation method.
  • axis – axis in y-data over which to interpolate.
  • copy – if True, make internal copies of x and y.
  • bounds_error – if True, raise error if interpolation function arguments are outside of x bounds.
  • fill_values – values to fill the outside-bounds regions if bounds_error==False.
  • assume_sorted – if True, assume that data is sorted.
Returns:

A 1D interpolation function.

pylablib.core.dataproc.interpolate.interpolate1D(data, x, kind='linear', bounds_error=True, fill_values=nan, assume_sorted=False)[source]

1D interpolation.

Parameters:
  • data – 2-column array [(x,y)], where y is a function of x.
  • x – Arrays of x coordinates for the points at which to find the values.
  • kind – Interpolation method.
  • bounds_error – if True, raise error if x values are outside of data bounds.
  • fill_values – values to fill the outside-bounds regions if bounds_error==False
  • assume_sorted – if True, assume that data is sorted.
Returns:

A 1D array with interpolated data.
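
A usage sketch (the data are illustrative):

    import numpy as np
    from pylablib.core.dataproc import interpolate

    data = np.column_stack((np.arange(10.), np.arange(10.)**2))   # y = x**2 sampled at integers
    interpolate.interpolate1D(data, [2.5, 3.5])                   # -> approximately [6.5, 12.5]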

pylablib.core.dataproc.interpolate.interpolate2D(data, x, y, method='linear', fill_value=nan)[source]

Interpolate data in 2D.

Simply a wrapper around scipy.interpolate.griddata().

Parameters:
  • data – 3-column array [(x,y,z)], where z is a function of x and y.
  • x/y – Arrays of x and y coordinates for the points at which to find the values.
  • method – Interpolation method.
Returns:

A 2D array with interpolated data.

pylablib.core.dataproc.interpolate.interpolateND(data, xs, method='linear')[source]

Interpolate data in N dimensions.

Simply a wrapper around scipy.interpolate.griddata().

Parameters:
  • data – (N+1)-column array [(x_1, ..., x_N, y)], where y is a function of x_1, ..., x_N.
  • xs – N-tuple of arrays of coordinates for the points at which to find the values.
  • method – Interpolation method.
Returns:

An ND array with interpolated data.

pylablib.core.dataproc.interpolate.regular_grid_from_scatter(data, x_points, y_points, x_range=None, y_range=None, method='nearest')[source]

Turn irregular scatter-points data into a regular 2D grid function.

Parameters:
  • data – 3-column array [(x,y,z)], where z is a function of x and y.
  • x_points/y_points – Number of points along x/y axes.
  • x_range/y_range – If not None, a tuple specifying the desired range of the data (all points in data outside the range are excluded).
  • method – Interpolation method (see scipy.interpolate.griddata() for options).
Returns:

A nested tuple (data, (x_grid, y_grid)), where all entries are 2D arrays (either with data or with gridpoint locations).

pylablib.core.dataproc.interpolate.interpolate_trace(trace, step, rng=None, x_column=0, select_columns=None, kind='linear', assume_sorted=False)[source]

Interpolate trace data over a regular grid with the given step.

rng specifies the interpolation range (by default, the whole data range). x_column specifies the column index for x-data. select_columns specifies which columns to interpolate and keep at the output (by default, all data). If assume_sorted==True, assume that x-data is sorted. kind specifies the interpolation method.

pylablib.core.dataproc.interpolate.average_interpolate_1D(data, step, rng=None, avg_kernel=1, min_weight=0, kind='linear')[source]

1D interpolation combined with pre-averaging.

Parameters:
  • data – 2-column array [(x,y)], where y is a function of x.
  • step – distance between the points in the interpolated data (all resulting x-coordinates are multiples of step).
  • rng – if not None, specifies interpolation range (by default, whole data range).
  • avg_kernel – kernel used for initial averaging. Can be either a 1D array, where each point corresponds to the relative bin weight, or an integer, which specifies simple rectangular kernel of the given width.
  • min_weight – minimal accumulated weight in a bin for it to be considered 'valid' (if a bin is invalid, its accumulated value is ignored, and its value is obtained at the interpolation step). min_weight of 0 implies any non-zero weight; otherwise, the requirement is weight >= min_weight.
  • kind – Interpolation method.
Returns:

A 2-column array with the interpolated data.

pylablib.core.dataproc.specfunc module

Specific useful functions.

pylablib.core.dataproc.specfunc.gaussian_k(x, sigma=1.0, height=None)[source]

Gaussian kernel function.

Normalized by the area if height is None, otherwise height is the value at 0.

pylablib.core.dataproc.specfunc.rectangle_k(x, width=1.0, height=None)[source]

Symmetric rectangle kernel function.

Normalized by the area if height is None, otherwise height is the value at 0.

pylablib.core.dataproc.specfunc.lorentzian_k(x, gamma=1.0, height=None)[source]

Lorentzian kernel function.

Normalized by the area if height is None, otherwise height is the value at 0.

pylablib.core.dataproc.specfunc.complex_lorentzian_k(x, gamma=1.0, amplitude=1j)[source]

Complex Lorentzian kernel function.

pylablib.core.dataproc.specfunc.exp_decay_k(x, width=1.0, height=None, mode='causal')[source]

Exponential decay kernel function.

Normalized by area if height=None (if possible), otherwise height is the value at 0.

Mode determines value for x<0:
  • 'causal' - it’s 0 for x<0;
  • 'step' - it’s constant for x<=0;
  • 'continue' - it’s a continuous decaying exponent;
  • 'mirror' - function is symmetric: exp(-|x|/width).
pylablib.core.dataproc.specfunc.get_kernel_func(kernel)[source]

Get a kernel function by its name.

Available functions are: 'gaussian', 'rectangle', 'lorentzian', 'exp_decay', 'complex_lorentzian'.

pylablib.core.dataproc.specfunc.rectangle_w(x, N, ft_compensated=False)[source]

Rectangle FT window function.

pylablib.core.dataproc.specfunc.gen_hamming_w(x, N, alpha, beta, ft_compensated=False)[source]

Generalized Hamming FT window function.

If ft_compensated==True, multiply the window function by a compensating factor to preserve power in the spectrum.

pylablib.core.dataproc.specfunc.hann_w(x, N, ft_compensated=False)[source]

Hann FT window function.

If ft_compensated==True, multiply the window function by a compensating factor to preserve power in the spectrum.

pylablib.core.dataproc.specfunc.hamming_w(x, N, ft_compensated=False)[source]

Specific Hamming FT window function.

If ft_compensated==True, multiply the window function by a compensating factor to preserve power in the spectrum.

pylablib.core.dataproc.specfunc.get_window_func(window)[source]

Get a window function by its name.

Available functions are: 'hamming', 'rectangle', 'hann'.

pylablib.core.dataproc.specfunc.gen_hamming_w_ft(f, t, alpha, beta)[source]

Get Fourier Transform of a generalized Hamming FT window function.

f is the argument, t is the total window size.

pylablib.core.dataproc.specfunc.rectangle_w_ft(f, t)[source]

Get Fourier Transform of the rectangle FT window function.

f is the argument, t is the total window size.

pylablib.core.dataproc.specfunc.hann_w_ft(f, t)[source]

Get Fourier Transform of the Hann FT window function.

f is the argument, t is the total window size.

pylablib.core.dataproc.specfunc.hamming_w_ft(f, t)[source]

Get Fourier Transform of the specific Hamming FT window function.

f is the argument, t is the total window size.

pylablib.core.dataproc.specfunc.get_window_ft_func(window)[source]

Get a Fourier Transform of a window function by its name.

Available functions are: 'hamming', 'rectangle', 'hann'.

pylablib.core.dataproc.table_wrap module

Utilities for uniform treatment of pandas tables and numpy arrays for functions which can deal with them both.

class pylablib.core.dataproc.table_wrap.IGenWrapper(container)[source]

Bases: object

The interface for a wrapper that gives uniform access to the basic methods of wrapped objects.

get_type()[source]

Get a string representing the wrapped object type

copy(wrapped=False)[source]

Copy the object.

If wrapped==True, return a new wrapper containing the object copy; otherwise, just return the copy.

ndim()[source]
shape()[source]

class pylablib.core.dataproc.table_wrap.I1DWrapper(container)[source]

Bases: pylablib.core.dataproc.table_wrap.IGenWrapper

A wrapper containing a 1D object (a 1D numpy array or a pandas Series object).

Provides a uniform access to basic methods of a wrapped object.

class Accessor(wrapper)[source]

Bases: object

An accessor: creates a simple uniform interface to treat the wrapped object element-wise (get/set/iterate over elements).

Generated automatically for each table on creation, doesn’t need to be created explicitly.

subcolumn(idx, wrapped=False)[source]

Return a subcolumn at index idx.

If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

static from_array(array, index=None, force_copy=False, wrapped=False)[source]

Build a new object of the type corresponding to the wrapper from the supplied array (a 1D numpy array or a list).

If force_copy==True, make a copy of supplied array. If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

classmethod from_columns(columns, column_names=None, index=None, wrapped=False)[source]

Build a new object of the type corresponding to the wrapper from the supplied columns (a list of columns; only length-1 lists are supported).

column_names parameter is ignored. If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

array_replaced(array, force_copy=False, preserve_index=False, wrapped=False)[source]

Return a copy of the column with the data replaced by array.

All of the parameters are the same as in from_array().

get_index()[source]

Get index of the given 1D trace, or None if none is available

get_type()[source]

Get a string representing the wrapped object type

copy(wrapped=False)[source]

Copy the object.

If wrapped==True, return a new wrapper containing the object copy; otherwise, just return the copy.

ndim()[source]
shape()
class pylablib.core.dataproc.table_wrap.Array1DWrapper(container)[source]

Bases: pylablib.core.dataproc.table_wrap.I1DWrapper

A wrapper for a 1D numpy array.

Provides uniform access to the basic methods of a wrapped object.

get_deleted(idx, wrapped=False)[source]

Return a copy of the column with the data at index idx deleted.

If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

get_inserted(idx, val, wrapped=False)[source]

Return a copy of the column with the data val added at index idx.

If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

insert(idx, val)[source]

Add data val at index idx

get_appended(val, wrapped=False)[source]

Return a copy of the column with the data val appended at the end.

If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

append(val)[source]

Append data val to the end

subcolumn(idx, wrapped=False)[source]

Return a subcolumn at index idx.

If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

static from_array(array, index=None, force_copy=False, wrapped=False)[source]

Build a new object of the type corresponding to the wrapper from the supplied array (a 1D numpy array or a list).

If force_copy==True, make a copy of the supplied array. If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

get_type()[source]

Get a string representing the wrapped object type

copy(wrapped=False)[source]

Copy the object.

If wrapped==True, return a new wrapper containing the object copy; otherwise, just return the copy.

class Accessor(wrapper)

Bases: object

An accessor: creates a simple uniform interface to treat the wrapped object element-wise (get/set/iterate over elements).

Generated automatically for each table on creation; it doesn’t need to be created explicitly.

array_replaced(array, force_copy=False, preserve_index=False, wrapped=False)

Return a copy of the column with the data replaced by array.

All of the parameters are the same as in from_array().

classmethod from_columns(columns, column_names=None, index=None, wrapped=False)

Build a new object of the type corresponding to the wrapper from the supplied columns (a list of columns; only length-1 lists are supported).

column_names parameter is ignored. If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

get_index()

Get index of the given 1D trace, or None if none is available

ndim()
shape()
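
For illustration, a minimal sketch of the copying and in-place methods listed above:

    import numpy as np
    from pylablib.core.dataproc import table_wrap

    col = table_wrap.Array1DWrapper(np.array([1.0, 2.0, 3.0]))
    col.append(4.0)                       # in-place append to the wrapped array
    trimmed = col.get_deleted(0)          # new array without element 0
    extended = col.get_inserted(1, 1.5)   # new array with 1.5 inserted at index 1
    print(col.get_type(), col.ndim(), col.shape())
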
class pylablib.core.dataproc.table_wrap.Series1DWrapper(container)[source]

Bases: pylablib.core.dataproc.table_wrap.I1DWrapper

A wrapper for a pandas Series object.

Provides uniform access to the basic methods of a wrapped object.

get_deleted(idx, wrapped=False)[source]

Return a copy of the column with the data at index idx deleted.

If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

get_inserted(idx, val, wrapped=False)[source]

Return a copy of the column with the data val added at index idx.

If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

get_appended(val, wrapped=False)[source]

Return a copy of the column with the data val appended at the end.

If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

subcolumn(idx, wrapped=False)[source]

Return a subcolumn at index idx.

If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

static from_array(array, index=None, force_copy=False, wrapped=False)[source]

Build a new object of the type corresponding to the wrapper from the supplied array (a 1D numpy array or a list).

If force_copy==True, make a copy of the supplied array. If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

get_index()[source]

Get index of the given 1D trace, or None if none is available

get_type()[source]

Get a string representing the wrapped object type

copy(wrapped=False)[source]

Copy the object.

If wrapped==True, return a new wrapper containing the object copy; otherwise, just return the copy.

class Accessor(wrapper)

Bases: object

An accessor: creates a simple uniform interface to treat the wrapped object element-wise (get/set/iterate over elements).

Generated automatically for each table on creation; it doesn’t need to be created explicitly.

array_replaced(array, force_copy=False, preserve_index=False, wrapped=False)

Return a copy of the column with the data replaced by array.

All of the parameters are the same as in from_array().

classmethod from_columns(columns, column_names=None, index=None, wrapped=False)

Build a new object of the type corresponding to the wrapper from the supplied columns (a list of columns; only length-1 lists are supported).

column_names parameter is ignored. If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

ndim()
shape()
class pylablib.core.dataproc.table_wrap.I2DWrapper(container, r=None, c=None, t=None)[source]

Bases: pylablib.core.dataproc.table_wrap.IGenWrapper

A wrapper containing a 2D object (a 2D numpy array or a pandas DataFrame object).

Provides uniform access to the basic methods of a wrapped object.

classmethod from_columns(columns, column_names=None, index=None, wrapped=False)[source]

Build a new object of the type corresponding to the wrapper from the supplied columns (a list of columns).

column_names supplies names of the columns (only relevant for DataFrame2DWrapper). If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

columns_replaced(columns, preserve_index=False, wrapped=False)[source]

Return a copy of the object with the data replaced by columns.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

static from_array(array, column_names=None, index=None, force_copy=False, wrapped=False)[source]

Build a new object of the type corresponding to the wrapper from the supplied array (a list of rows or a 2D numpy array).

column_names supplies names of the columns (only relevant for DataFrame2DWrapper). If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

array_replaced(array, preserve_index=None, force_copy=False, wrapped=False)[source]

Return a copy of the table with the data replaced by array.

All of the parameters are the same as in from_array().

get_index()[source]

Get index of the given 2D table, or None if none is available

get_type()[source]

Get a string representing the wrapped object type

copy(wrapped=False)[source]

Copy the object.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

column(idx, wrapped=False)[source]

Get a column at index idx.

Return a 1D numpy array for a 2D numpy array object, and a Series object for a pandas DataFrame. If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

subtable(idx, wrapped=False)[source]

Return a subtable at index idx.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

ndim()[source]
shape()
class pylablib.core.dataproc.table_wrap.Array2DWrapper(container)[source]

Bases: pylablib.core.dataproc.table_wrap.I2DWrapper

A wrapper for a 2D numpy array.

Provides uniform access to the basic methods of a wrapped object.

set_container(cont)[source]
class RowAccessor(wrapper, storage)[source]

Bases: object

A row accessor: creates a simple uniform interface to treat the wrapped object row-wise (append/insert/delete/iterate over rows).

Generated automatically for each table on creation; it doesn’t need to be created explicitly.

get_deleted(idx, wrapped=False)[source]

Return a new table with the rows at idx deleted.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

get_inserted(idx, val, wrapped=False)[source]

Return a new table with new rows given by val inserted at idx.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

insert(idx, val)[source]

Insert new rows given by val at index idx.

get_appended(val, wrapped=False)[source]

Return a new table with new rows given by val appended to the end of the table.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

append(val)[source]

Append new rows given by val to the end of the table

class ColumnAccessor(wrapper, storage)[source]

Bases: object

A column accessor: creates a simple uniform interface to treat the wrapped object column-wise (append/insert/delete/iterate over columns).

Generated automatically for each table on creation; it doesn’t need to be created explicitly.

get_deleted(idx, wrapped=False)[source]

Return a new table with the columns at idx deleted.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

get_inserted(idx, val, wrapped=False)[source]

Return a new table with new columns given by val inserted at idx.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

insert(idx, val)[source]

Insert new columns given by val at index idx.

get_appended(val, wrapped=False)[source]

Return a new table with new columns given by val appended to the end of the table.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

append(val)[source]

Append new columns given by val to the end of the table

set_names(names)[source]

Set column names (does nothing)

get_names()[source]

Get column names (all names are None)

get_column_index(idx)[source]

Get the numerical index for a given column index

class TableAccessor(storage)[source]

Bases: object

A table accessor: accessing the table data through this interface returns an object of the appropriate type (a numpy array for a numpy-wrapped object, and a DataFrame for a pandas DataFrame-wrapped object).

Generated automatically for each table on creation; it doesn’t need to be created explicitly.

subtable(idx, wrapped=False)[source]

Return a subtable at index idx of the appropriate type (2D numpy array).

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

column(idx, wrapped=False)[source]

Get a column at index idx as a 1D numpy array.

If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

classmethod from_columns(columns, column_names=None, index=None, wrapped=False)[source]

Build a new object of the type corresponding to the wrapper from the supplied columns (a list of columns).

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table. column_names parameter is ignored.

static from_array(array, column_names=None, index=None, force_copy=False, wrapped=False)[source]

Build a new object of the type corresponding to the wrapper from the supplied array (a list of rows or a 2D numpy array).

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table. column_names parameter is ignored.

get_type()[source]

Get a string representing the wrapped object type

copy(wrapped=False)[source]

Copy the object.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

array_replaced(array, preserve_index=None, force_copy=False, wrapped=False)

Return a copy of the table with the data replaced by array.

All of the parameters are the same as in from_array().

columns_replaced(columns, preserve_index=False, wrapped=False)

Return a copy of the object with the data replaced by columns.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

get_index()

Get index of the given 2D table, or None if none is available

ndim()
shape()
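
For illustration, a sketch building a numpy table column-wise and reading it back through the wrapper (all calls as documented above):

    import numpy as np
    from pylablib.core.dataproc import table_wrap

    xs = np.linspace(0.0, 1.0, 5)
    w = table_wrap.Array2DWrapper.from_columns([xs, xs**2], wrapped=True)
    ys = w.column(1)                 # second column as a 1D numpy array
    print(w.get_type(), w.shape())   # wrapped object type and its shape
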
class pylablib.core.dataproc.table_wrap.DataFrame2DWrapper(container)[source]

Bases: pylablib.core.dataproc.table_wrap.I2DWrapper

A wrapper for a pandas DataFrame object.

Provides uniform access to the basic methods of a wrapped object.

class RowAccessor(wrapper, storage)[source]

Bases: object

A row accessor: creates a simple uniform interface to treat the wrapped object row-wise (append/insert/delete/iterate over rows).

Generated automatically for each table on creation; it doesn’t need to be created explicitly.

get_deleted(idx, wrapped=False)[source]

Return a new table with the rows at idx deleted.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

get_inserted(idx, val, wrapped=False)[source]

Return a new table with new rows given by val inserted at idx.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

insert(idx, val)[source]

Insert new rows given by val at index idx.

get_appended(val, wrapped=False)[source]

Return a new table with new rows given by val appended to the end of the table.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

append(val)[source]

Append new rows given by val to the end of the table

class ColumnAccessor(wrapper, storage)[source]

Bases: object

A column accessor: creates a simple uniform interface to treat the wrapped object column-wise (append/insert/delete/iterate over columns).

Generated automatically for each table on creation; it doesn’t need to be created explicitly.

get_deleted(idx, wrapped=False)[source]

Return a new table with the columns at idx deleted.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

get_inserted(idx, val, column_name=None, wrapped=False)[source]

Return a new table with new columns given by val inserted at idx.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

insert(idx, val, column_name=None)[source]

Insert new columns given by val at index idx

get_appended(val, column_name=None, wrapped=False)[source]

Return a new table with new columns given by val appended to the end of the table.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

append(val, column_name=None)[source]

Append new columns given by val to the end of the table

set_names(names)[source]

Set column names

get_names()[source]

Get column names

get_column_index(idx)[source]

Get the numerical index for a given column index

class TableAccessor(storage)[source]

Bases: object

A table accessor: accessing the table data through this interface returns an object of the appropriate type (a numpy array for a numpy-wrapped object, and a DataFrame for a pandas DataFrame-wrapped object).

Generated automatically for each table on creation; it doesn’t need to be created explicitly.

subtable(idx, wrapped=False)[source]

Return a subtable at index idx of the appropriate type (pandas DataFrame).

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

column(idx, wrapped=False)[source]

Get a column at index idx as a pandas Series object.

If wrapped==True, return a new wrapper containing the column; otherwise, just return the column.

classmethod from_columns(columns, column_names=None, index=None, wrapped=False)[source]

Build a new object of the type corresponding to the wrapper from the supplied columns (a list of columns).

column_names supplies names of the columns (only relevant for DataFrame2DWrapper). If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

static from_array(array, column_names=None, index=None, force_copy=False, wrapped=False)[source]

Build a new object of the type corresponding to the wrapper from the supplied array (a list of rows or a 2D numpy array).

column_names supplies names of the columns (only relevant for DataFrame2DWrapper). If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

get_index()[source]

Get index of the given 2D table, or None if none is available

get_type()[source]

Get a string representing the wrapped object type

copy(wrapped=False)[source]

Copy the object.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

array_replaced(array, preserve_index=None, force_copy=False, wrapped=False)

Return a copy of the table with the data replaced by array.

All of the parameters are the same as in from_array().

columns_replaced(columns, preserve_index=False, wrapped=False)

Return a copy of the object with the data replaced by columns.

If wrapped==True, return a new wrapper containing the table; otherwise, just return the table.

ndim()
shape()
pylablib.core.dataproc.table_wrap.wrap1d(container)[source]

Wrap a 1D container (a 1D numpy array or a pandas Series) into an appropriate wrapper

pylablib.core.dataproc.table_wrap.wrap2d(container)[source]

Wrap a 2D container (a 2D numpy array or a pandas DataFrame) into an appropriate wrapper

pylablib.core.dataproc.table_wrap.wrap(container)[source]

Wrap a container (a numpy array, a pandas Series, or a pandas DataFrame) into an appropriate wrapper
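
As an illustration of the dispatch, a minimal sketch treating numpy and pandas containers uniformly:

    import numpy as np
    import pandas as pd
    from pylablib.core.dataproc import table_wrap

    def describe(container):
        w = table_wrap.wrap(container)   # picks the appropriate wrapper class
        return w.get_type(), w.ndim(), w.shape()

    print(describe(np.zeros((4, 2))))        # dispatched to Array2DWrapper
    print(describe(pd.Series([1.0, 2.0])))   # dispatched to Series1DWrapper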

pylablib.core.dataproc.transform module

class pylablib.core.dataproc.transform.LinearTransform(tmatr=None, shift=None, ndim=2)[source]

Bases: object

A generic linear transform, which combines multiplication by a given matrix with an additional shift (i.e., an affine transform).

Parameters:
  • tmatr – transformation matrix (if None, use the identity matrix)
  • shift – added shift (if None, use a zero shift)
  • ndim – if both tmatr and shift are None, specifies the dimensionality of the transform; otherwise, ignored
i(coord, shift=True)[source]
inverted()[source]

Return inverted transformation

preceded(trans)[source]

Return a combined transformation which results from applying this transformation followed by trans

followed(trans)[source]

Return a combined transformation which results from applying trans followed by this transformation

shifted(shift, preceded=False)[source]

Return a transform with an added shift before or after (depending on preceded) the current one

multiplied(mult, preceded=False)[source]

Return a transform with an added scaling before or after (depending on preceded) the current one.

mult can be a single number (scale), a 1D vector (scaling for each axis independently), or a matrix.

rotated2d(deg, preceded=False)[source]

Return a transform with an added rotation before or after (depending on preceded) the current one.

Only applies to 2D transforms.
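
For illustration, a sketch composing a rotation and a shift (the method i() is undocumented here; it is assumed to apply the transform to a coordinate):

    import numpy as np
    from pylablib.core.dataproc import transform

    tr = transform.LinearTransform(ndim=2).rotated2d(90).shifted([1.0, 0.0])
    pt = tr.i([1.0, 0.0])        # assumed: rotate by 90 degrees, then shift
    back = tr.inverted().i(pt)   # recovers the original coordinate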

class pylablib.core.dataproc.transform.Indexed2DTransform(tmatr=None, shift=None, rigid=False)[source]

Bases: pylablib.core.dataproc.transform.LinearTransform

A restriction of LinearTransform which only applies to 2D and only allows rotations by multiples of 90 degrees.

Parameters:
  • tmatr – transformation matrix (if None, use the identity matrix)
  • shift – added shift (if None, use a zero shift)
  • rigid – if True, only allow orthogonal transforms, i.e., no scaling
rotated2d(deg, preceded=False)[source]

Return a transform with an added rotation before or after (depending on preceded) the current one.

Only applies to 2D transforms.

followed(trans)

Return a combined transformation which results from applying trans followed by this transformation

i(coord, shift=True)
inverted()

Return inverted transformation

multiplied(mult, preceded=False)

Return a transform with an added scaling before or after (depending on preceded) the current one.

mult can be a single number (scale), a 1D vector (scaling for each axis independently), or a matrix.

preceded(trans)

Return a combined transformation which results from applying this transformation followed by trans

shifted(shift, preceded=False)

Return a transform with an added shift before or after (depending on preceded) the current one

pylablib.core.dataproc.utils module

Generic utilities for dealing with numerical arrays.

pylablib.core.dataproc.utils.is_ascending(trace)[source]

Check if the trace is ascending.

If it has more than 1 dimension, check all lines along the 0th axis.

pylablib.core.dataproc.utils.is_descending(trace)[source]

Check if the trace is descending.

If it has more than 1 dimension, check all lines along the 0th axis.

pylablib.core.dataproc.utils.is_ordered(trace)[source]

Check if the trace is ordered (ascending or descending).

If it has more than 1 dimension, check all lines along the 0th axis.

pylablib.core.dataproc.utils.is_linear(trace)[source]

Check if the trace is linear (values go with a constant step).

If it has more than 1 dimension, check all lines along the 0th axis (with the same step for all).

pylablib.core.dataproc.utils.get_x_column(t, x_column=None, idx_default=False)[source]

Get x column of the table.

x_column can be
  • an array: return as is;
  • '#': return index array;
  • None: equivalent to '#' for 1D data if idx_default==False, or to 0 otherwise;
  • integer: return the column with this index.
pylablib.core.dataproc.utils.get_y_column(t, y_column=None)[source]

Get y column of the table.

y_column can be
  • an array: return as is;
  • '#': return index array;
  • None: return t for 1D data, or column 1 otherwise;
  • integer: return the column with this index.
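
A minimal sketch of the default column conventions described above:

    import numpy as np
    from pylablib.core.dataproc import utils

    data = np.column_stack([np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1.0, 5)**2])
    xs = utils.get_x_column(data)         # column 0 for a 2D table
    ys = utils.get_y_column(data)         # column 1 for a 2D table
    idx = utils.get_x_column(data, "#")   # row index array instead of a column
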
pylablib.core.dataproc.utils.sort_by(t, x_column=None, reverse=False, stable=False)[source]

Sort a table using the selected column as a key, preserving rows.

If reverse==True, sort in descending order. x_column values are described in get_x_column(). If stable==True, use a stable sort (can be slower and use more memory, but preserves the order of elements with the same key).

pylablib.core.dataproc.utils.filter_by(t, columns=None, pred=None, exclude=False)[source]

Filter 1D or 2D array using a predicate.

If the data is 2D, columns contains the indices of the columns to be passed to the pred function. If exclude==True, drop the rows satisfying pred rather than keeping them.
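
For illustration, a sketch of sorting and filtering (it is assumed here that pred receives the selected columns as arrays and returns a boolean mask over the rows):

    import numpy as np
    from pylablib.core.dataproc import utils

    data = np.column_stack([[3.0, 1.0, 2.0], [30.0, 10.0, 20.0]])
    by_x = utils.sort_by(data, x_column=0)   # rows reordered by column 0
    # assumption: pred gets the selected columns and returns a boolean row mask
    kept = utils.filter_by(data, columns=[1], pred=lambda y: y > 15)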

pylablib.core.dataproc.utils.unique_slices(t, u_column)[source]

Split a table into subtables with different values in a given column.

Return a list of subtables of t, each of which has a different value in u_column (the value is the same for all rows within a given subtable).

pylablib.core.dataproc.utils.merge(ts, idx=None, as_array=True)[source]

Merge several tables column-wise.

If idx is not None, then it is a list of index columns (one column per table) used for merging. The rows that have the same value in the index columns are merged; if some values aren’t contained in all the ts, the corresponding rows are omitted. If idx is None, just join the tables together (they must have the same number of rows).

If as_array==True, return a simple numpy array as a result; otherwise, return a pandas DataFrame if applicable (note that in this case all column names in all tables must be different to avoid conflicts).
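
A sketch of index-based merging under the semantics above:

    import numpy as np
    from pylablib.core.dataproc import utils

    t1 = np.column_stack([[1, 2, 3], [10, 20, 30]])     # index column + data
    t2 = np.column_stack([[2, 3, 4], [0.2, 0.3, 0.4]])
    # rows are matched on column 0 of each table; index values 1 and 4
    # appear in only one of the tables, so those rows are omitted
    merged = utils.merge([t1, t2], idx=[0, 0])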

class pylablib.core.dataproc.utils.Range(start=None, stop=None)[source]

Bases: object

Single data range.

If start or stop are None, it’s implied that they’re at infinity (i.e., Range(None,None) is infinite). If the range object is None, it’s implied that the range is empty.

start
stop
contains(x)[source]

Check if x is in the range

intersect(*rngs)[source]

Find an intersection of multiple ranges.

If the intersection is empty, return None.

rescale(mult=1.0, shift=0.0)[source]
tup()[source]
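
A minimal sketch of range arithmetic (the return value of tup() is undocumented here and assumed to be the (start, stop) pair):

    from pylablib.core.dataproc.utils import Range

    rng = Range(0.0, 10.0)
    print(rng.contains(5.0))        # True
    upper = Range(2.0, None)        # unbounded above
    common = rng.intersect(upper)   # overlapping range; None if empty
    print(common.tup())             # assumed to return the (start, stop) pair
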
pylablib.core.dataproc.utils.find_closest_arg(xs, x, approach='both', ordered=False)[source]

Find the index of a value in xs that is closest to x.

approach can take values 'top', 'bottom' or 'both' and denotes from which side the array elements should approach x (meaning that the found array element should be >x, <x, or simply the closest one, respectively). If there are no elements lying on the desired side of x (e.g., approach=='top' and all elements of xs are less than x), the function returns None. If ordered==True, then xs is assumed to be in ascending or descending order, and a binary search is used (works only for 1D arrays). If there are recurring elements, any one of them can be returned.

pylablib.core.dataproc.utils.find_closest_value(xs, x, approach='both', ordered=False)[source]
pylablib.core.dataproc.utils.get_range_indices(xs, xs_range, ordered=False)[source]

Find trace indices corresponding to the given range.

The range is defined as xs_range[0]:xs_range[1], or infinite if xs_range==None (so the data is returned unchanged in that case). If ordered==True, then the function assumes that xs is in ascending or descending order.

pylablib.core.dataproc.utils.cut_to_range(t, xs_range, x_column=None, ordered=False)[source]

Cut the table to the given range based on x_column.

The range is defined as xs_range[0]:xs_range[1], or infinite if xs_range==None. x_column is used to determine which column’s values to use to check if the point is in range (see get_x_column()). If ordered==True, then the function assumes that x_column is in ascending order.
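
A sketch of cutting a table to a given x range:

    import numpy as np
    from pylablib.core.dataproc import utils

    data = np.column_stack([np.linspace(0.0, 10.0, 11), np.arange(11.0)])
    inside = utils.cut_to_range(data, (2.5, 7.5))   # rows with x between 2.5 and 7.5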

pylablib.core.dataproc.utils.cut_out_regions(t, regions, x_column=None, ordered=False, multi_pass=True)[source]

Cut the regions out of t based on x_column.

x_column is used to determine which column’s values to use to check if the point is in range (see get_x_column()). If ordered==True, then the function assumes that x_column is in ascending order. If multi_pass==False, combine all indices before deleting the data in a single operation (works faster, but only for non-intersecting regions).

pylablib.core.dataproc.utils.find_discrete_step(trace, min_fraction=1e-08, tolerance=1e-05)[source]

Try to find a minimal divisor of all steps in a 1D trace.

min_fraction is the minimal possible size of the divisor (relative to the minimal non-zero step size). tolerance is the tolerance of the division. Raise an ArithmeticError if no such value is found.

pylablib.core.dataproc.utils.unwrap_mod_data(trace, wrap_range)[source]

Unwrap data given wrap_range.

Assume that every jump greater than 0.5*wrap_range is not real and is caused by the value being wrapped. Can be used, e.g., to unwrap phase data.
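
For illustration, a sketch of unwrapping a phase ramp that was wrapped into [0, 2*pi):

    import numpy as np
    from pylablib.core.dataproc import utils

    phase = np.linspace(0.0, 6 * np.pi, 200) % (2 * np.pi)   # wrapped phase data
    unwrapped = utils.unwrap_mod_data(phase, 2 * np.pi)      # restores the ramp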

pylablib.core.dataproc.utils.pad_trace(trace, pad, mode='constant', cval=0.0)[source]

Expand a 1D trace or a multi-column table for different convolution techniques.

Wrapper around numpy.pad(), but can handle pandas DataFrames or multi-column arrays. Note that the index data is not preserved.

Parameters:
  • trace – 1D array-like object.
  • pad (int or tuple) – Expansion size. Can be an integer, if the pad on both sides is equal, or a 2-tuple (left, right) specifying the pads on the two sides.
  • mode (str) – Expansion mode. Takes the same values as numpy.pad(). Common values are 'constant' (added values are determined by cval), 'edge' (added values are the end values of the trace), 'reflect' (reflect the trace with respect to its endpoint) or 'wrap' (wrap the values from the other side).
  • cval (float) – If mode=='constant', determines the expanded values.
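
A sketch of the padding modes:

    import numpy as np
    from pylablib.core.dataproc import utils

    trace = np.array([1.0, 2.0, 3.0])
    print(utils.pad_trace(trace, 2, mode="edge"))      # [1. 1. 1. 2. 3. 3. 3.]
    print(utils.pad_trace(trace, (1, 0), cval=-1.0))   # [-1.  1.  2.  3.]
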
pylablib.core.dataproc.utils.xy2c(t)[source]

Convert a trace or a table from xy representation to complex data.

t is a 2D array with either 2 columns (x and y) or 3 columns (index, x and y). Return a 2D array with either 1 column (c) or 2 columns (index and c).

pylablib.core.dataproc.utils.c2xy(t)[source]

Convert a trace or a table from complex representation to split x and y data.

t is either a 1D array (c data) or a 2D array with either 1 column (c) or 2 columns (index and c). Return a 2D array with either 2 columns (x and y) or 3 columns (index, x and y).
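
For illustration, a round-trip sketch (assuming the usual convention c = x + i*y):

    import numpy as np
    from pylablib.core.dataproc import utils

    xy = np.column_stack([[1.0, 0.0], [0.0, 1.0]])   # two rows of (x, y)
    c = utils.xy2c(xy)     # single complex column (assumed c = x + 1j*y)
    back = utils.c2xy(c)   # split back into x and y columns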

Module contents