From d9ca3acb309aab62cdde091f9b435609f9168f41 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Wed, 25 Oct 2023 09:42:36 +0000 Subject: [PATCH] Deploy to GitHub pages --- index.html | 1959 ++++++++++++++++++++++++++++++++++++++++++++++++++++ util.html | 584 ++++++++++++++++ 2 files changed, 2543 insertions(+) create mode 100644 index.html create mode 100644 util.html diff --git a/index.html b/index.html new file mode 100644 index 0000000..a20eba4 --- /dev/null +++ b/index.html @@ -0,0 +1,1959 @@

scm_confocal API documentation
+
+
+

Package scm_confocal

+
+
+
+
+

Sub-modules

+
+
scm_confocal.util
+
+
+
+
+
+
+
+
+
+
+

Classes

+
+
+class sp8_image +(filename, image) +
+
+

Subclass of sp8_lif providing relevant attributes and functions for a specific image in the .lif file. Should not be instantiated directly, but rather obtained through sp8_lif.get_image()

+

Parameters

+
+
filename : str
+
file name of the parent .lif file
+
image : int
+
index number of the image in the parent .lif file
+
+

Attributes

+
+
image : int
+
index number of the image in the parent .lif file
+
lifimage : readlif.LifImage class instance
+
The underlying class instance of the readlif library.
+
+

Additionally, attributes and functions of the parent sp8_lif instance are +inherited and directly accessible, as well as all attributes of the +readlif.LifImage instance.
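For illustration, a minimal sketch of how such an instance is typically obtained and used (the filename 'experiment.lif' is hypothetical, and the classes are assumed to be importable from the package top level, as the class index above suggests):

    from scm_confocal import sp8_lif

    # open the parent .lif file and request the second image stored in it
    lif = sp8_lif('experiment.lif', quiet=True)
    img = lif.get_image(image=1)

    # attributes of the parent sp8_lif and of the readlif.LifImage instance
    # are reachable directly on the sp8_image instance
    print(img.get_name())       # '<lif file name>_<image name>'
    print(img.get_pixelsize())  # (z,y,x) pixel/voxel size in µm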

+

Inherits all functions and attributes from the parent sp8_lif class and adds some image-specific ones.

+

Ancestors

+
  • scm_confocal.sp8.sp8_lif

Methods

+
+
+def export_with_scalebar(self, frame=0, channel=0, filename=None, **kwargs) +
+
+

saves an exported image of the confocal slice with a scalebar in one of the four corners, where barsize is the scalebar size in data units (e.g. µm) and scale sets the overall size of the scalebar and text with respect to the width of the image. Additionally, a colormap is applied to the data for better visualisation.
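A minimal usage sketch, assuming img is an sp8_image instance as obtained above and using only keyword arguments from the parameter list below (the output filename is hypothetical):

    # export frame 0 of channel 1 with a 10 µm scale bar in the bottom right corner
    bgra = img.export_with_scalebar(
        frame=0,
        channel=1,
        filename='slice_with_scalebar.png',
        barsize=10,      # scale bar length in data units (µm by default)
        loc=3,           # 3 = bottom right corner
        draw_box=True,   # box behind bar and text for contrast on busy images
    )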

+

Parameters

+
+
frame : int, optional
+
index of the frame to export. The default is 0.
+
channel : int or list of int, optional
+
the channel to pull the image data from. For displaying multiple +channels in a single image, a list of channel indices can be given, +as well as a list of colormaps for each channel through the cmap +parameter. The default is 0.
+
filename : string or None, optional
+
Filename + extension to use for the export file. The default is the +filename sans extension of the original .lif file, with +image name and '_exported.png' appended.
+
crop : tuple or None, optional
+
+

range describing an area of the original image (before rescaling the resolution) to crop out for the export image. Can have two forms:

  • ((xmin,ymin),(xmax,ymax)), with the integer indices of the top left and bottom right corners respectively.
  • (xmin,ymin,w,h) with the integer indices of the top left corner and the width and height of the cropped image in pixels (prior to optional rescaling using resolution).

The default is None which takes the entire image.

+
+
crop_unit : 'pixels' or 'data', optional
+
sets the unit in which the width and height in crop are +specified when using the (x,y,w,h) format, with 'pixels' to give +the size in pixels or 'data' to specify the size in the physical +unit used for the scalebar (after optional unit conversion via the +convert parameter). Note that the position of the top left corner +is given in pixels. The ((xmin,ymin),(xmax,ymax)) format must be +always given in pixels, and crop_unit is ignored if crop is +given in this format. The default is 'pixels'.
+
resolution : int, optional
+
the resolution along the x-axis (i.e. image width in pixels) to use +for the exported image. The default is None, which uses the size +of the original image (after optional cropping using crop).
+
cmap : str or callable or list of str or list of callable, optional
+
+

name of a named Matplotlib colormap used to color the data. see the +Matplotlib documentation +for more information. The default is 'inferno'.

+

In addition to the colormaps listed there, the following maps for linearly incrementing pure RGB channels are available, useful for e.g. displaying multichannel data with complementary colors (no overlap between colormaps possible): ['pure_reds', 'pure_greens', 'pure_blues', 'pure_yellows', 'pure_cyans', 'pure_purples', 'pure_greys'], where for example 'pure_reds' scales between RGB values (0,0,0) and (255,0,0), and 'pure_cyans' between (0,0,0) and (0,255,255).

+

Alternatively, a fully custom colormap may be used by entering a +ListedColormap +or LinearSegmentedColormap +object from the Matplotlib.colors module. For more information on +creating colormaps, see the Matplotlib documentation linked above.

+

For multichannel data, a list of colormaps must be provided, with +a separate colormap for each channel.

+
+
cmap_range : tuple of form (min,max) or None or 'automatic', optional
+
sets the scaling of the colormap. The minimum and maximum values to map the colormap to; values outside of this range will be colored according to the min and max value of the colormap. The default is None, which takes the lowest and highest value in the image. Alternatively 'automatic' may be specified, which scales between the 10th and 99th percentile. For multichannel data a list of cmap_range options per channel may be provided.
+
draw_bar : boolean, optional
+
whether to draw a scalebar on the image, such that this function +may be used to put other text on the image or just to apply a +colormap (by setting draw_bar=False and draw_text=False). The +default is True.
+
barsize : float or None, optional
+
size (in data units matching the original scale bar, e.g. nm) of the scale bar to use. The default is None, which takes the desired length for the current scale (ca. 15% of the width of the image for scale=1) and rounds this to the nearest option from a list of "nice" values.
+
scale : float, optional
+
factor to change the size of the scalebar+text with respect to the width of the image. Scale is chosen such that at scale=1 the font size of the scale bar text is approximately 10 pt when the image is printed at half the width of the text in a typical A4 paper document (e.g. two images side-by-side). Note that this is with respect to the output image, so after optional cropping and/or up/down sampling has been applied. The default is 1.
+
loc : int, one of [0,1,2,3], optional
+
Location of the scalebar on the image, where 0, 1, 2 and 3 +refer to the top left, top right, bottom left and bottom right +respectively. The default is 2, which is the bottom left corner.
+
convert : str, one of ['fm','pm','Å' or A,'nm','µm' or 'um','mm','cm','dm','m'], optional
+
Unit that will be used for the scale bar, the value will be +automatically converted if this unit differs from the pixel size +unit. The default is None, which uses micrometers.
+
barcolor : tuple of ints, optional
+
RGB color to use for the scalebar and text, given as a tuple of +form (R,G,B) or (R,G,B,A) where R, G B and A are values between 0 +and 255 for red, green, blue and alpha respectively. The default is +(255,255,255), which gives a white scalebar.
+
barthickness : int, optional
+
thickness in printer points of the scale bar itself. The default is +16.
+
barpad : int, optional
+
size in printer points of the padding between the scale bar and the +surrounding box. The default is 10.
+
draw_text : bool, optional
+
whether to draw the text specified in text on the image; the text is placed above the scale bar if draw_bar=True. The default is True.
+
text : str, optional
+
the text to draw on the image (above the scale bar if +draw_bar=True). The default is None, which gives the size and +unit of the scale bar (e.g. '10 µm').
+
font : str, optional
+
filename of an installed TrueType font ('.ttf' file) to use for the +text on the scalebar. The default is 'arialbd.ttf'.
+
fontsize : int, optional
+
base font size to use for the scale bar text. The default is 16. +Note that this size will be re-scaled according to resolution and +scale.
+
fontcolor : tuple of int, optional
+
(R,G,B) tuple where R, G and B are red, green and blue values from +0 to 255. The default is (255,255,255) giving white text.
+
fontbaseline : int, optional
+
vertical offset for the baseline of the scale bar text in from the +top of the scale bar in printer points. The default is 10.
+
fontpad : int, optional
+
minimum size in printer points of the space/padding between the +text and surrounding box. The default is 10.
+
draw_box : bool, optional
+
Whether to put a colored box behind the scalebar and text to +enhance contrast on busy images. The default is False.
+
boxcolor : tuple of ints, optional
+
RGB color to use for the box behind/around the scalebar and text, +given as a tuple of form (R,G,B) or (R,G,B,A) where R, G B and A +are values between 0 and 255 for red, green and blue respectively. +If no A is given, boxopacity is used. The default is (0,0,0) +which gives a black box.
+
boxopacity : int, optional
+
value between 0 and 255 for the opacity/alpha of the box, useful +for creating a semitransparent box. The default is 255.
+
boxpad : int, optional
+
size of the space/padding around the box (with respect to the sides +of the image) in printer points. The default is 10.
+
save : bool, optional
+
whether to save the image as file. The default is True.
+
show_figure : bool, optional
+
whether to open matplotlib figure windows. The default is True.
+
+

Returns

+
+
Y×X×4 numpy.array containing the BGRA pixel data
+
 
+
+
+
+def get_channel(self, chl) +
+
+

get info from the metadata on a specific channel

+

Parameters

+
+
chl : int
+
index number of the channel.
+
+

Returns

+
+
channel : dict
+
dictionary containing all metadata for that channel
+
+
+
+def get_channels(self) +
+
+

parse the image's xml data for the channels.

+

Returns

+
+
list of dictionaries
+
 
+
+
+
+def get_detector_settings(self) +
+
+

Parses the xml metadata for the detector settings.

+

Returns

+
+
dictionary or (in case of multichannel data) a list thereof
+
 
+
+
+
+def get_dimension(self, dim) +
+
+

Gets the dimension data for a particular dimension of an image. +Dimension can be given both as integer index (as specified by the Leica +MetaData which may not correspond to the indexing order of the data +stack) or as string containing the physical meaning, e.g. 'x-axis', +'time', 'excitation wavelength', etc.

+

Parameters

+
+
dim : int or str
+
dimension to get metadata of specified as integer or as name.
+
+

Returns

+
+
dimension : dict
+
dictionary containing all metadata for that dimension
+
+
+
+def get_dimension_steps(self, dim, load_stack_indices=False) +
+
+

returns a list of corresponding physical values for all steps along +a given dimension, e.g. a list of time steps or x coordinates. +Dimension can be given both as integer index (as specified by the Leica +MetaData, which may not correspond to the indexing order of the data +stack), or as string containing the physical meaning, e.g. 'x-axis', +'time', 'excitation wavelength', etc.

+

Parameters

+
+
dim : int or str
+
dimension to get metadata of specified as integer or as name.
+
+

Returns

+
+
steps : list of float
+
physical values of the steps along the chosen dimension, (e.g. +a list of pixel x-coordinates, list of time stamps, …).
+
unit : str
+
physical unit of the data.
+
+
+
+def get_dimension_stepsize(self, dim) +
+
+

returns the step size along a dimension, e.g. time interval, pixel +size, etc, as (value, unit) tuple. Dimension can be given both as +integer index (as specified by the Leica MetaData, which may not +correspond to the indexing order of the data stack), or as string +containing the physical meaning, e.g. 'x-axis', 'time', 'excitation +wavelength', etc.
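For example, together with get_dimension_steps() above (assuming img is an sp8_image instance; the printed values are illustrative only):

    # pixel size along the x-axis and its unit, e.g. (0.1136, 'µm')
    stepsize, unit = img.get_dimension_stepsize('x-axis')

    # physical values of all steps along the time dimension, e.g. in 's'
    times, time_unit = img.get_dimension_steps('time')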

+

Parameters

+
+
dim : int or str
+
dimension to get metadata of specified as integer or as name.
+
+

Returns

+
+
stepsize : float
+
physical size of one step (e.g. pixel, time interval, …).
+
unit : str
+
physical unit of the data.
+
+
+
+def get_dimensions(self) +
+
+

parse the image's xml data for the dimensions.

+

Returns

+
+
list of dictionaries
+
 
+
+
+
+def get_laser_settings(self) +
+
+

Parses the xml metadata for the laser settings.

+

Returns

+
+
dictionary with laser data
+
 
+
+
+
+def get_name(self) +
+
+

shortcut for getting the name of the dataset / image for e.g. +automatically generating filenames for stored results.

+

The format is: <lif file name (without file extension)>_<image name>

+
+
+def get_pixelsize(self) +
+
+

shorthand for get_dimension_stepsize() to get the pixel/voxel size converted to micrometer, along whatever spatial dimensions are present in the data, in order of slowest to fastest axis, i.e. typically (z,y,x) but e.g. (y,z,x) for an xzy scan. Dimensions not present in the data are skipped.

+

Returns

+
+
pixelsize : tuple of float
+
physical size in µm of the pixels/voxels along (z,y,x)
+
+
+
+def get_stage_position(self) +
+
+

Returns base (z,y,x) position of the stage in micrometer

+
+
+def load_frame(self, i=0, channel=None) +
+
+

returns the specified image frame, where a frame is considered a 2D image in the plane of the two fastest axes in the recording order (typically xy).
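As a short sketch (assuming img is a multichannel sp8_image with at least four frames, an assumption made purely for illustration):

    # single 2D frame of channel 0 as a numpy.ndarray
    frame = img.load_frame(i=0, channel=0)

    # both channels of frame 3, returned as a tuple of arrays
    ch0, ch1 = img.load_frame(i=3, channel=[0, 1])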

+

Parameters

+
+
i : int, optional
+
the index number of the requested image. The default is 0.
+
channel : int or list of int, optional
+
which channel(s) to return. For multiple channels, a tuple with a numpy.ndarray for each channel is returned; for a single channel a single numpy.ndarray is returned. The default is to return all channels.
+
+

Returns

+
+
frame : numpy.ndarray or tuple of numpy.ndarray
+
the raw image data values for the requested frame / channel(s)
+
+
+
+def load_plane(self, display_dims=None, indices=None) +
+
+

load 2D plane / slice of arbitrary orientation from the data

+

Parameters

+
+
display_dims : tuple of length 2, optional
+
the 2 dimensions defining the 2D image plane to load. The default +is the imaging plane (the two fastest axes, typically xy).
+
indices : dict, optional
+
index values for all other planes. The default is 0 for all dims.
+
+

Returns

+
+
np.ndarray
+
array containing the pixel values of the selected plane.
+
+
+
+def load_stack(self, dim_range=None, dtype=None, quiet=False) +
+
+

Similar to sp8_series.load_data(), but converts the 3D array of images +automatically to a np.ndarray of the appropriate dimensionality.

+

Array dimensions are specified as follows:

+
  • If the number of detector channels is 2 or higher, the first array axis is the detector channel index (named 'channel').
  • If the number of channels is 1, the first array axis is the first available dimension (instead of 'channel').
  • Each subsequent array axis corresponds to a dimension as specified by, and in reversed order of, the metadata exported by the microscope software, excluding dimensions which are not available. The default order of dimensions in the metadata is:
      1. 'channel' (excluded for single channel data)
      2. 'x-axis'
      3. 'y-axis'
      4. 'z-axis'
      5. 'time'
      6. 'detection wavelength'
      7. 'excitation wavelength'
      8. 'mosaic'
  • As an example, a 2 channel xyt measurement would result in a 4-d array with axis order ('channel','time','y-axis','x-axis'), and a single channel xyz scan would be returned as ('z-axis','y-axis','x-axis').

For loading only part of the total dataset, the dim_range parameter can +be used to specify a range along any of the dimensions. This will be +more memory efficient than loading the entire stack and then discarding +part of the data. For slicing along the x or y axis this is not +possible and whole (xy) images must be loaded prior to discarding +data outside the specified x or y axis range.
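A short sketch of a partial load (assuming img is an sp8_image of a multichannel time series with a z-stack; the chosen ranges are arbitrary):

    # only channel 0, the first 5 time steps and z-slices 20 up to 30
    data, dimorder = img.load_stack(
        dim_range={'channel': 0, 'time': slice(None, 5), 'z-axis': slice(20, 30)}
    )
    print(dimorder)    # e.g. ('time', 'z-axis', 'y-axis', 'x-axis')
    print(data.shape)  # matches dimorder; 'channel' is squeezed out by the int index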

+

Parameters

+
+
dim_range : dict, optional
+
+

dict, with keys corresponding to channel/dimension labels as above +and int or slice objects as values. This allows you to only load +part of the data along any of the dimensions, such as only loading +one channel of multichannel data or a particular z-range. An +example use for only taking time steps up to 5 and z-slice 20 to 30 +would be:

+
dim_range={'time':slice(None,5), 'z-axis':slice(20,30)}.
+
+

When an int is given, only that slice along the dimension is taken and the dimension is squeezed out of the data. The default is {}.

+
+
dtype : (numpy) datatype, optional
+
type to scale data to. The default is None which uses the same bit +depth as the original image (either 8- or 16-bit unsigned int).
+
+

Returns

+
+
data : numpy.ndarray
+
ndarray with the pixel values
+
dimorder : tuple
+
tuple with length data.ndim specifying the ordering of dimensions in the data with labels from the metadata of the microscope.
+
+
+
+def print_metadata(self) +
+
+

Prints a somewhat formatted version of the full image metadata; the xml hierarchy is indicated with prepended dashes.

+
+
+def save_metadata(self, filename=None) +
+
+

stores the image xml metadata to a file

+

Parameters

+
+
filename : str, optional
+
filename to use. The default is the result of get_name() +'_metadata.xml' appended.
+
+
+
+
+
+class sp8_lif +(filename=None, quiet=False) +
+
+

Class of functions related to the sp8 microscope, for data saved as .lif +files, the default file format for the Leica LAS-X software. Essentially +a wrapper around the readlif library, which provides access to the data +and metadata directly in Python.

+

The underlying readlif.LifFile instance can be accessed directly using the sp8_lif.liffile attribute, and any of its attributes are accessible through sp8_lif directly.
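A minimal sketch of typical use (the filename and series name are hypothetical):

    from scm_confocal import sp8_lif

    lif = sp8_lif('experiment.lif')    # prints the file contents unless quiet=True
    print(lif.filename)                # filename with extension, even if omitted above

    img = lif.get_image('Series003')   # by series name, or by integer index
    lif.save_metadata()                # writes '<lif file name>_metadata.xml'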

+

Parameters

+
+
filename : str
+
Filename of the .lif file. Extension may be (but is not required to +be) included.
+
quiet : bool, optional
+
can be used to suppress printing the contents of the file. The default +is False.
+
+

Returns

+

sp8_lif class instance

+

Attributes

+
+
liffile : readlif.LifFile instance
+
The underlying class instance of the readlif library.
+
filename : str
+
filename of the loaded .lif file with file extension included, even if it was not given when initializing the class.
+
+

See Also

+
+
sp8_image
+
a subclass for specific images in the dataset.
+
readlif
+
the library used for accessing the files.

Initialize the class instance and the underlying LifFile instance.
+
+

Subclasses

+
  • scm_confocal.sp8.sp8_image

Methods

+
+
+def get_image(self, image=0) +
+
+

returns an sp8_image instance containing relevant attributes and +functions for the specific image in the dataset, which provides the +"bread and butter" of data access.

+

Parameters

+
+
image : int or str, optional
+
The image (or image series) to obtain. May be given as index number +(int) or as the name of the series (string). The default is the +first image in the file.
+
+

Returns

+

sp8_image class instance

+
+
+def get_liffile_image(self, image=0) +
+
+

returns the readlif.LifImage instance for a particular image in the +dataset.

+

Parameters

+
+
image : int or str, optional
+
The image (or image series) to obtain. May be given as index number +(int) or as the name of the series (string). The default is the +first image in the file.
+
+

Returns

+

readlif.LifImage class instance

+
+
+def get_name(self) +
+
+

shortcut for getting the name (filename sans extension) of the dataset for e.g. automatically generating filenames for stored results.

+
+
+def save_metadata(self, filename=None) +
+
+

stores the xml metadata to a file

+

Parameters

+
+
filename : str, optional
+
filename to use. The default is the name of the Lif file with +'_metadata.xml' appended.
+
+
+
+
+
+class sp8_series +(fmt='*.tif') +
+
+

Class of functions related to the sp8 microscope. The functions assume that the data are exported as .tif files and placed in their own folder per series. The current working directory is assumed to be that folder. For several functions it is required that the xml metadata is present in a subfolder of the working directory called 'MetaData', which is normally generated automatically when exporting tif files as raw data.
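A minimal sketch of typical use, assuming the exported .tif files (and optionally the 'MetaData' subfolder) live in the hypothetical folder below:

    import os
    from scm_confocal import sp8_series

    # the class operates on the current working directory
    os.chdir('/path/to/exported/series')
    series = sp8_series()                  # finds '*.tif' by default

    data = series.load_data()              # (files, y, x) array
    stack, dimorder = series.load_stack()  # reshaped to the logical dimensions
    pixelsize = series.get_pixelsize()     # (z,y,x) in µm, requires the metadata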

+

Attributes

+
+
filenames : list of str
+
the filenames loaded associated with the series
+
data : numpy array
+
the image data as loaded on the most recent call of +sp8_series.load_data()
+
metadata : xml.Elementtree root
+
the recording parameters associated with the image series
+
+

Initialize the class instance and assign the filenames of the data.

+

Parameters

+
+
fmt : str, optional
+
format to use for finding the files. Uses the notation of the glob +library. The default is '*.tif'.
+
+

Methods

+
+
+def export_with_scalebar(self, frame=0, channel=0, filename=None, **kwargs) +
+
+

saves an exported image of the confocal slice with a scalebar in one of the four corners, where barsize is the scalebar size in data units (e.g. µm) and scale sets the overall size of the scalebar and text with respect to the width of the image. Additionally, a colormap is applied to the data for better visualisation.

+

Parameters

+
+
frame : int, optional
+
index of the frame to export. The default is 0.
+
channel : int or list of int, optional
+
the channel to pull the image data from. For displaying multiple +channels in a single image, a list of channel indices can be given, +as well as a list of colormaps for each channel through the cmap +parameter. The default is 0.
+
filename : string or None, optional
+
Filename + extension to use for the export file. The default is the filename sans extension of the original file, with '_exported.png' appended.
+
crop : tuple or None, optional
+
+

range describing an area of the original image (before rescaling the resolution) to crop out for the export image. Can have two forms:

  • ((xmin,ymin),(xmax,ymax)), with the integer indices of the top left and bottom right corners respectively.
  • (xmin,ymin,w,h) with the integer indices of the top left corner and the width and height of the cropped image in pixels (prior to optional rescaling using resolution).

The default is None which takes the entire image.

+
+
crop_unit : 'pixels' or 'data', optional
+
sets the unit in which the width and height in crop are +specified when using the (x,y,w,h) format, with 'pixels' to give +the size in pixels or 'data' to specify the size in the physical +unit used for the scalebar (after optional unit conversion via the +convert parameter). Note that the position of the top left corner +is given in pixels. The ((xmin,ymin),(xmax,ymax)) format must be +always given in pixels, and crop_unit is ignored if crop is +given in this format. The default is 'pixels'.
+
resolution : int, optional
+
the resolution along the x-axis (i.e. image width in pixels) to use +for the exported image. The default is None, which uses the size +of the original image (after optional cropping using crop).
+
cmap : str or callable or list of str or list of callable, optional
+
+

name of a named Matplotlib colormap used to color the data. see the +Matplotlib documentation +for more information. The default is 'inferno'.

+

In addition to the colormaps listed there, the following maps for linearly incrementing pure RGB channels are available, useful for e.g. displaying multichannel data with complementary colors (no overlap between colormaps possible): ['pure_reds', 'pure_greens', 'pure_blues', 'pure_yellows', 'pure_cyans', 'pure_purples', 'pure_greys'], where for example 'pure_reds' scales between RGB values (0,0,0) and (255,0,0), and 'pure_cyans' between (0,0,0) and (0,255,255).

+

Alternatively, a fully custom colormap may be used by entering a +ListedColormap +or LinearSegmentedColormap +object from the Matplotlib.colors module. For more information on +creating colormaps, see the Matplotlib documentation linked above.

+

For multichannel data, a list of colormaps must be provided, with +a separate colormap for each channel.

+
+
cmap_range : tuple of form (min,max) or None or 'automatic', optional
+
sets the scaling of the colormap. The minimum and maximum values to map the colormap to; values outside of this range will be colored according to the min and max value of the colormap. The default is None, which takes the lowest and highest value in the image. Alternatively 'automatic' may be specified, which scales between the 10th and 99th percentile. For multichannel data a list of cmap_range options per channel may be provided.
+
draw_bar : boolean, optional
+
whether to draw a scalebar on the image, such that this function +may be used to put other text on the image or just to apply a +colormap (by setting draw_bar=False and draw_text=False). The +default is True.
+
barsize : float or None, optional
+
size (in data units matching the original scale bar, e.g. nm) of the scale bar to use. The default is None, which takes the desired length for the current scale (ca. 15% of the width of the image for scale=1) and rounds this to the nearest option from a list of "nice" values.
+
scale : float, optional
+
factor to change the size of the scalebar+text with respect to the width of the image. Scale is chosen such that at scale=1 the font size of the scale bar text is approximately 10 pt when the image is printed at half the width of the text in a typical A4 paper document (e.g. two images side-by-side). Note that this is with respect to the output image, so after optional cropping and/or up/down sampling has been applied. The default is 1.
+
loc : int, one of [0,1,2,3], optional
+
Location of the scalebar on the image, where 0, 1, 2 and 3 +refer to the top left, top right, bottom left and bottom right +respectively. The default is 2, which is the bottom left corner.
+
convert : str, one of ['fm','pm','Å' or A,'nm','µm' or 'um','mm','cm','dm','m'], optional
+
Unit that will be used for the scale bar, the value will be +automatically converted if this unit differs from the pixel size +unit. The default is None, which uses micrometers.
+
barcolor : tuple of ints, optional
+
RGB color to use for the scalebar and text, given as a tuple of +form (R,G,B) or (R,G,B,A) where R, G B and A are values between 0 +and 255 for red, green, blue and alpha respectively. The default is +(255,255,255), which gives a white scalebar.
+
barthickness : int, optional
+
thickness in printer points of the scale bar itself. The default is +16.
+
barpad : int, optional
+
size in printer points of the padding between the scale bar and the +surrounding box. The default is 10.
+
draw_text : bool, optional
+
whether to draw the text specified in text on the image; the text is placed above the scale bar if draw_bar=True. The default is True.
+
text : str, optional
+
the text to draw on the image (above the scale bar if +draw_bar=True). The default is None, which gives the size and +unit of the scale bar (e.g. '10 µm').
+
font : str, optional
+
filename of an installed TrueType font ('.ttf' file) to use for the +text on the scalebar. The default is 'arialbd.ttf'.
+
fontsize : int, optional
+
base font size to use for the scale bar text. The default is 16. +Note that this size will be re-scaled according to resolution and +scale.
+
fontcolor : tuple of int, optional
+
(R,G,B) tuple where R, G and B are red, green and blue values from +0 to 255. The default is (255,255,255) giving white text.
+
fontbaseline : int, optional
+
vertical offset for the baseline of the scale bar text in from the +top of the scale bar in printer points. The default is 10.
+
fontpad : int, optional
+
minimum size in printer points of the space/padding between the +text and surrounding box. The default is 10.
+
draw_box : bool, optional
+
Whether to put a colored box behind the scalebar and text to +enhance contrast on busy images. The default is False.
+
boxcolor : tuple of ints, optional
+
RGB color to use for the box behind/around the scalebar and text, +given as a tuple of form (R,G,B) or (R,G,B,A) where R, G B and A +are values between 0 and 255 for red, green and blue respectively. +If no A is given, boxopacity is used. The default is (0,0,0) +which gives a black box.
+
boxopacity : int, optional
+
value between 0 and 255 for the opacity/alpha of the box, useful +for creating a semitransparent box. The default is 255.
+
boxpad : int, optional
+
size of the space/padding around the box (with respect to the sides +of the image) in printer points. The default is 10.
+
save : bool, optional
+
whether to save the image as file. The default is True.
+
show_figure : bool, optional
+
whether to open matplotlib figure windows. The default is True.
+
+

Returns

+
+
Y×X×4 numpy.array containing the BGRA pixel data
+
 
+
+
+
+def get_dimension_steps(self, dim, load_stack_indices=False) +
+
+

Gets a list of values for each step along the specified dimension, e.g. +a list of timestamps for the images or a list of height values for all +slices of a z-stack. For specification of dimensions, see +sp8_series.get_metadata_dimension()

+

Parameters

+
+
dim : int or str
+
dimension to get steps for
+
load_stack_indices : bool
+
if True, trims down the list of steps based on the dim_range used +when last loading data with load_stack
+
+

Returns

+
+
steps : list
+
list of values for every logical step in the data
+
unit : str
+
physical unit of the step values
+
+
+
+def get_dimension_stepsize(self, dim) +
+
+

Get the size of a single step along the specified dimension, e.g. +the pixelsize in x, y or z, or the time between timesteps. For +specification of dimensions, see sp8_series.get_metadata_dimension()

+

Parameters

+
+
dim : int or str
+
dimension to get stepsize for
+
+

Returns

+
+
value : float
+
stepsize
+
unit : str
+
physical unit of value
+
+
+
+def get_laser_settings(self) +
+
+

Parses the xml metadata for the laser settings.

+

Returns

+
+
dictionary with laser data
+
 
+
+
+
+def get_metadata_channels(self) +
+
+

Gets the channel information from the metadata

+

Returns

+
+
channels : list of dict
+
list of dictionaries with length equal to number of channels where +each dict contains the metadata for one channel
+
+
+
+def get_metadata_dimension(self, dim) +
+
+

Gets the dimension data for a particular dimension. Dimension can be +given both as integer index (as specified by the Leica exported +MetaData which may not correspond to the indexing order of the data +stack) or as string containing the physical meaning, e.g. 'x-axis', +'time', 'excitation wavelength', etc.

+

Parameters

+
+
dim : int or str
+
dimension to get metadata of specified as integer or as name.
+
+

Returns

+
+
dimension : dict
+
dictionary containing all metadata for that dimension
+
+
+
+def get_metadata_dimensions(self) +
+
+

Gets the dimension information from the metadata

+

Returns

+
+
dimensions : list of dict
+
list of dictionaries with length equal to the number of dimensions, where each dict contains the metadata for one data dimension
+
+
+
+def get_name(self) +
+
+

Returns a string containing the filename (sans file extension) under +which the series is saved.

+

Returns

+
+
name : str
+
name of the series
+
+
+
+def get_pixelsize(self) +
+
+

shorthand for get_dimension_stepsize() to get the pixel/voxel size +converted to micrometer, along whatever spatial dimensions are present +in the data. Is given as (z,y,x) where dimensions not present in the +data are skipped.

+

Returns

+
+
pixelsize : tuple of float
+
physical size in µm of the pixels/voxels along (z,y,x)
+
+
+
+def get_series_name(self) +
+
+

Deprecated, renamed to get_name()

+
+
+def load_data(self, filenames=None, first=None, last=None, dtype=numpy.uint8) +
+
+

Loads the sequence of images into ndarray of form (files,y,x) and +converts the data to dtype

+

Parameters

+
+
filenames : list of str, optional
+
filenames of images to load. The default is what is passed from __init__, which by default is all .tif images in the current working directory.
+
first : None or int, optional
+
index of first image to load. The default is None.
+
last : None or int, optional
+
index of last image to load plus one. The default is None.
+
dtype : (numpy) datatype, optional
+
type to scale data to. The default is np.uint8.
+
+

Returns

+
+
data : numpy.ndarray
+
3d numpy array with dimension order (filenames,y,x).
+
+
+
+def load_metadata(self) +
+
+

Load the xml metadata exported with the files as xml_root object which +can be indexed with xml.etree.ElementTree
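For example (assuming series is an sp8_series instance with metadata present; the tag name searched for is typical of Leica exports but is an assumption here, not guaranteed by this package):

    meta = series.load_metadata()

    # the returned element tree can be searched with the standard ElementTree API
    for dim in meta.iter('DimensionDescription'):
        print(dim.attrib)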

+

Returns

+
+
metadata : xml.etree.ElementTree object
+
Parsable xml tree object containing all the metadata
+
+
+
+def load_stack(self, dim_range=None, dtype=numpy.uint8) +
+
+

Similar to sp8_series.load_data(), but converts the 3D array of images +automatically to a np.ndarray of the appropriate dimensionality.

+

Array dimensions are specified as follows:

+
  • If the number of detector channels is 2 or higher, the first array axis is the detector channel index (named 'channel').
  • If the number of channels is 1, the first array axis is the first available dimension (instead of 'channel').
  • Each subsequent array axis corresponds to a dimension as specified by, and in reversed order of, the metadata exported by the microscope software, excluding dimensions which are not available. The default order of dimensions in the metadata is:
      1. 'channel' (excluded for single channel data)
      2. 'x-axis'
      3. 'y-axis'
      4. 'z-axis'
      5. 'time'
      6. 'detection wavelength'
      7. 'excitation wavelength'
  • As an example, a 2 channel xyt measurement would result in a 4-d array with axis order ('channel','time','y-axis','x-axis'), and a single channel xyz scan would be returned as ('z-axis','y-axis','x-axis').

For loading only part of the total dataset, the dim_range parameter can +be used to specify a range along any of the dimensions. This will be +more memory efficient than loading the entire stack and then discarding +part of the data. For slicing along the x or y axis this is not +possible and whole (xy) images must be loaded prior to discarding +data outside the specified x or y axis range.

+

Parameters

+
+
dim_range : dict, optional
+
+

dict, with keys corresponding to channel/dimension labels as above +and int or slice objects as values. This allows you to only load +part of the data along any of the dimensions, such as only loading +one channel of multichannel data or a particular z-range. An +example use for only taking time steps up to 5 and z-slice 20 to 30 +would be:

+
dim_range={'time':slice(None,5), 'z-axis':slice(20,30)}.
+
+

When an int is given, only that slice along the dimension is taken and the dimension is squeezed out of the data. The default is {}.

+
+
dtype : (numpy) datatype, optional
+
type to scale data to. The default is np.uint8.
+
+

Returns

+
+
data : numpy.ndarray
+
ndarray with the pixel values
+
dimorder : tuple
+
tuple with length data.ndim specifying the ordering of dimensions in the data with labels from the metadata of the microscope.
+
+
+
+
+
+class visitech_faststack +(filename, zsize, zstep, zbacksteps, zstart=0, magnification=63, binning=1) +
+
+

functions for fast stacks taken with the custom MicroManager Visitech driver, saved to multipage .ome.tiff files containing the entire stack
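A minimal construction sketch (the filename and stack geometry below are hypothetical):

    from scm_confocal import visitech_faststack

    # 20 µm stacks recorded in 0.5 µm steps with 10 backsteps after each stack,
    # using a 63x objective and no detector binning
    stack = visitech_faststack(
        'faststack_MMStack.ome.tif',
        zsize=20, zstep=0.5, zbacksteps=10,
        magnification=63, binning=1,
    )
    print(stack.get_pixelsize())   # (z,y,x) pixel size with unit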

+

initialize class (lazy-loads data)

+

Parameters

+
+
filename : string
+
name of first ome.tiff file (extension optional)
+
zsize : float
+
z size (in um) of stack (first im to last)
+
zstep : float
+
step size in z
+
zbacksteps : int
+
number of backwards steps in z direction after each stack
+
zstart : float
+
actual height of bottom of stack/lowest slice. The default is 0.
+
magnification : float, optional
+
magnification of objective lens used. The default is 63.
+
binning : int
+
binning factor performed at the detector level, e.g. in +MicroManager software, in XY
+
+

Methods

+
+
+def export_with_scalebar(self, stack=0, zslice=0, filename=None, **kwargs) +
+
+

saves an exported image of the confocal slice with a scalebar in one of the four corners, where barsize is the scalebar size in data units (e.g. µm) and scale sets the overall size of the scalebar and text with respect to the width of the image. Additionally, a colormap is applied to the data for better visualisation.

+

Parameters

+
+
stack : int, optional
+
integer index of the z-stack to take the frame to export from. The +default is 0.
+
zslice : int, optional
+
integer index of the frame within stack to export. The default is +0.
+
filename : string or None, optional
+
Filename + extension to use for the export file. The default is the filename sans extension of the original file, with '_exported.png' appended.
+
crop : tuple or None, optional
+
+

range describing an area of the original image (before rescaling the resolution) to crop out for the export image. Can have two forms:

  • ((xmin,ymin),(xmax,ymax)), with the integer indices of the top left and bottom right corners respectively.
  • (xmin,ymin,w,h) with the integer indices of the top left corner and the width and height of the cropped image in pixels (prior to optional rescaling using resolution).

The default is None which takes the entire image.

+
+
crop_unit : 'pixels' or 'data', optional
+
sets the unit in which the width and height in crop are +specified when using the (x,y,w,h) format, with 'pixels' to give +the size in pixels or 'data' to specify the size in the physical +unit used for the scalebar (after optional unit conversion via the +convert parameter). Note that the position of the top left corner +is given in pixels. The ((xmin,ymin),(xmax,ymax)) format must be +always given in pixels, and crop_unit is ignored if crop is +given in this format. The default is 'pixels'.
+
resolution : int, optional
+
the resolution along the x-axis (i.e. image width in pixels) to use +for the exported image. The default is None, which uses the size +of the original image (after optional cropping using crop).
+
cmap : str or callable or list of str or list of callable, optional
+
+

name of a named Matplotlib colormap used to color the data. see the +Matplotlib documentation +for more information. The default is 'inferno'.

+

In addition to the colormaps listed there, the following maps for linearly incrementing pure RGB channels are available, useful for e.g. displaying multichannel data with complementary colors (no overlap between colormaps possible): ['pure_reds', 'pure_greens', 'pure_blues', 'pure_yellows', 'pure_cyans', 'pure_purples', 'pure_greys'], where for example 'pure_reds' scales between RGB values (0,0,0) and (255,0,0), and 'pure_cyans' between (0,0,0) and (0,255,255).

+

Alternatively, a fully custom colormap may be used by entering a +ListedColormap +or LinearSegmentedColormap +object from the Matplotlib.colors module. For more information on +creating colormaps, see the Matplotlib documentation linked above.

+

For multichannel data, a list of colormaps must be provided, with +a separate colormap for each channel.

+
+
cmap_range : tuple of form (min,max) or None or 'automatic', optional
+
sets the scaling of the colormap. The minimum and maximum values to map the colormap to; values outside of this range will be colored according to the min and max value of the colormap. The default is None, which takes the lowest and highest value in the image. Alternatively 'automatic' may be specified, which scales between the 10th and 99th percentile. For multichannel data a list of cmap_range options per channel may be provided.
+
draw_bar : boolean, optional
+
whether to draw a scalebar on the image, such that this function +may be used to put other text on the image or just to apply a +colormap (by setting draw_bar=False and draw_text=False). The +default is True.
+
barsize : float or None, optional
+
size (in data units matching the original scale bar, e.g. nm) of the scale bar to use. The default is None, which takes the desired length for the current scale (ca. 15% of the width of the image for scale=1) and rounds this to the nearest option from a list of "nice" values.
+
scale : float, optional
+
factor to change the size of the scalebar+text with respect to the width of the image. Scale is chosen such that at scale=1 the font size of the scale bar text is approximately 10 pt when the image is printed at half the width of the text in a typical A4 paper document (e.g. two images side-by-side). Note that this is with respect to the output image, so after optional cropping and/or up/down sampling has been applied. The default is 1.
+
loc : int, one of [0,1,2,3], optional
+
Location of the scalebar on the image, where 0, 1, 2 and 3 +refer to the top left, top right, bottom left and bottom right +respectively. The default is 2, which is the bottom left corner.
+
convert : str, one of ['fm','pm','Å' or A,'nm','µm' or 'um','mm','cm','dm','m'], optional
+
Unit that will be used for the scale bar, the value will be +automatically converted if this unit differs from the pixel size +unit. The default is None, which uses micrometers.
+
barcolor : tuple of ints, optional
+
RGB color to use for the scalebar and text, given as a tuple of +form (R,G,B) or (R,G,B,A) where R, G B and A are values between 0 +and 255 for red, green, blue and alpha respectively. The default is +(255,255,255), which gives a white scalebar.
+
barthickness : int, optional
+
thickness in printer points of the scale bar itself. The default is +16.
+
barpad : int, optional
+
size in printer points of the padding between the scale bar and the +surrounding box. The default is 10.
+
draw_text : bool, optional
+
whether to draw the text specified in text on the image; the text is placed above the scale bar if draw_bar=True. The default is True.
+
text : str, optional
+
the text to draw on the image (above the scale bar if +draw_bar=True). The default is None, which gives the size and +unit of the scale bar (e.g. '10 µm').
+
font : str, optional
+
filename of an installed TrueType font ('.ttf' file) to use for the +text on the scalebar. The default is 'arialbd.ttf'.
+
fontsize : int, optional
+
base font size to use for the scale bar text. The default is 16. +Note that this size will be re-scaled according to resolution and +scale.
+
fontcolor : tuple of int, optional
+
(R,G,B) tuple where R, G and B are red, green and blue values from +0 to 255. The default is (255,255,255) giving white text.
+
fontbaseline : int, optional
+
vertical offset for the baseline of the scale bar text in from the +top of the scale bar in printer points. The default is 10.
+
fontpad : int, optional
+
minimum size in printer points of the space/padding between the +text and surrounding box. The default is 10.
+
draw_box : bool, optional
+
Whether to put a colored box behind the scalebar and text to +enhance contrast on busy images. The default is False.
+
boxcolor : tuple of ints, optional
+
RGB color to use for the box behind/around the scalebar and text, +given as a tuple of form (R,G,B) or (R,G,B,A) where R, G B and A +are values between 0 and 255 for red, green and blue respectively. +If no A is given, boxopacity is used. The default is (0,0,0) +which gives a black box.
+
boxopacity : int, optional
+
value between 0 and 255 for the opacity/alpha of the box, useful +for creating a semitransparent box. The default is 255.
+
boxpad : int, optional
+
size of the space/padding around the box (with respect to the sides +of the image) in printer points. The default is 10.
+
save : bool, optional
+
whether to save the image as file. The default is True.
+
show_figure : bool, optional
+
whether to open matplotlib figure windows. The default is True.
+
+

Returns

+
+
Y×X×4 numpy.array containing the BGRA pixel data
+
 
+
+
+
+def get_metadata(self) +
+
+

loads OME metadata from visitech .ome.tif file and returns xml tree +object

+

Returns

+
+
xml.etree.ElementTree
+
formatted XML metadata. Can be indexed with +xml_root.find('')
+
+
+
+def get_pixelsize(self) +
+
+

shortcut to get (z,y,x) pixelsize with unit

+
+
+def get_series_name(self) +
+
+

Returns a name for the series based on the filename.

+

Returns

+
+
str
+
 
+
+
+
+def get_timestamps(self, load_stack_indices=False) +
+
+

loads OME metadata from visitech .ome.tif file and returns timestamps

+

Parameters

+
+
load_stack_indices : boolean
+
if True, only returns timestamps from frames which were loaded +at call to visitech_faststack.load_stack(), and using the same +dimension order / stack shape
+
+

Returns

+
+
times : numpy (nd)array of floats
+
list/stack of timestamps for each of the frames in the data
+
+
+
+def load_data(self, indices=slice(None, None, None), dtype=numpy.uint16, xslice=None, yslice=None) +
+
+

load images from datafile into 3D numpy array

+

Parameters

+
+
indices : slice object or list of ints, optional
+
which images from tiffstack to load. The default is +slice(None,None,None).
+
+

Returns

+
+
numpy.ndarray containing image data in dim order (im,y,x)
+
 
+
+
+
+def load_stack(self, dim_range={}, dtype=numpy.uint16, remove_backsteps=True, offset=0, force_reshape=False) +
+
+

Load the data and reshape into 4D stack with the following dimension +order: ('time','z-axis','y-axis','x-axis')

+

For loading only part of the total dataset, the dim_range parameter can +be used to specify a range along any of the dimensions. This will be +more memory efficient than loading the entire stack and then discarding +part of the data. For slicing along the x or y axis this is not +possible and whole (xy) images must be loaded prior to discarding +data outside the specified x or y axis range.
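A short sketch of a partial load (assuming stack is a visitech_faststack instance as constructed above; the chosen time range is arbitrary):

    # load only the first 10 time steps, discarding the backstep frames
    data = stack.load_stack(
        dim_range={'time': slice(None, 10)},
        remove_backsteps=True,
    )
    print(data.shape)   # ordered as ('time', 'z-axis', 'y-axis', 'x-axis')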

+

Parameters

+
+
dim_range : dict, optional
+
+

dict, with keys corresponding to channel/dimension labels as above +and slice objects as values. This allows you to only load part of +the data along any of the dimensions, such as only loading two +time steps or a particular z-range. An example use for only taking +time steps up to 5 and z-slice 20 to 30 would +be:

+
dim_range={'time':slice(None,5), 'z-axis':slice(20,30)}.
+
+

The default is {} which corresponds to the full file.

+
+
dtype : (numpy) datatype, optional
+
type to scale data to. The default is np.uint16.
+
remove_backsteps : bool
+
whether to discard the frames which were recorded on the backsteps +downwards
+
offset : int
+
offset the indices by a constant number of frames in case the first +im is not the first slice of the first stack
+
force_reshape : bool
+
in case of an incorrect number of steps during acquisition, you can use this to ignore the reshape error occurring upon trying to sort 2d images into a 4d stack series
+
+

Returns

+
+
data : numpy.ndarray
+
ndarray with the pixel values
+
+
+
+def save_stack(self, data, filename_prefix='visitech_faststack', sequence_type='multipage') +
+
+

save stacks to tiff files

+

Parameters

+
+
data : numpy ndarray with 3 or 4 dimensions
+
image series pixel values with dimension order (z,y,x) or (t,z,y,x)
+
filename_prefix : string, optional
+
prefix to use for filename. The time/z-axis index is appended if +relevant. The default is 'visitech_faststack'.
+
sequence_type : {'multipage','image_sequence','multipage_sequence'}, optional
+
+

The way to store the data. The following options are available:

+
- 'image_sequence' : stores as a series of 2D images with time and/or frame number appended
- 'multipage' : stores all data in a single multipage tiff file
- 'multipage_sequence' : stores a multipage tiff file for each time step
+
+

The default is 'multipage'.

+
+
+

Returns

+

None, but writes file(s) to working directory.

+
+
+def yield_stack(self, dim_range={}, dtype=numpy.uint16, remove_backsteps=True, offset=0, force_reshape=False) +
+
+

Lazy-load the data and reshape into 4D stack with the following dimension order: ('time','z-axis','y-axis','x-axis'). Returns a generator which yields one z-stack per iteration, loading the data only when that stack is requested.

+

For loading only part of the total dataset, the dim_range parameter can +be used to specify a range along any of the dimensions. This will be +more memory efficient than loading the entire stack and then discarding +part of the data. For slicing along the x or y axis this is not +possible and whole (xy) images must be loaded prior to discarding +data outside the specified x or y axis range. +The shape of the stack can be accessed without loading data using the +stack_shape attribute after creating the yield_stack object.
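A short sketch of lazy iteration (assuming stack is a visitech_faststack instance; the location of the stack_shape attribute is taken from the description above):

    zstacks = stack.yield_stack(remove_backsteps=True)
    print(stack.stack_shape)        # shape known before any pixel data is read

    for i, zstack in enumerate(zstacks):
        print(i, zstack.mean())     # each (z,y,x) stack is loaded only here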

+

Parameters

+
+
dim_range : dict, optional
+
+

dict, with keys corresponding to channel/dimension labels as above +and slice objects as values. This allows you to only load part of +the data along any of the dimensions, such as only loading two +time steps or a particular z-range. An example use for only taking +time steps up to 5 and z-slice 20 to 30 would +be:

+
dim_range={'time':slice(None,5), 'z-axis':slice(20,30)}.
+
+

The default is {} which corresponds to the full file.

+
+
dtype : (numpy) datatype, optional
+
type to scale data to. The default is np.uint16.
+
remove_backsteps : bool
+
whether to discard the frames which were recorded on the backsteps +downwards
+
offset : int
+
offset the indices by a constant number of frames in case the first +im is not the first slice of the first stack
+
force_reshape : bool
+
in case of an incorrect number of steps during acquisition, you can use this to ignore the reshape error occurring upon trying to sort 2d images into a 4d stack series
+
+

Returns

+
+
zstack : iterable/generator yielding numpy.ndarray
+
list of time steps, with for each time step a z-stack as np.ndarray +with the pixel values
+
+
+
+
+
+class visitech_series +(filename, magnification=63, binning=1) +
+
+

Functions for image series taken with the multi-D acquisition menu in MicroManager with the Visitech, saved to multipage .ome.tiff files. For the custom fast stack sequence use visitech_faststack.

+

initialize class (lazy-loads data)

+

Parameters

+
+
filename : string
+
name of first ome.tiff file (extension optional)
+
magnification : float, optional
+
magnification of objective lens used. The default is 63.
+
binning : int
+
binning factor performed at the detector level, e.g. in +MicroManager software, in XY
+
+

Methods

+
+
+def export_with_scalebar(self, frame=0, filename=None, **kwargs) +
+
+

saves an exported image of the confocal slice with a scalebar in one of the four corners, where barsize is the scalebar size in data units (e.g. µm) and scale sets the overall size of the scalebar and text with respect to the width of the image. Additionally, a colormap is applied to the data for better visualisation.

+

Parameters

+
+
frame : int, optional
+
index of the frame to export. The default is 0.
+
filename : string or None, optional
+
Filename + extension to use for the export file. The default is the filename sans extension of the original file, with '_exported.png' appended.
+
crop : tuple or None, optional
+
+

range describing an area of the original image (before rescaling the resolution) to crop out for the export image. Can have two forms:

  • ((xmin,ymin),(xmax,ymax)), with the integer indices of the top left and bottom right corners respectively.
  • (xmin,ymin,w,h) with the integer indices of the top left corner and the width and height of the cropped image in pixels (prior to optional rescaling using resolution).

The default is None which takes the entire image.

+
+
crop_unit : 'pixels' or 'data', optional
+
sets the unit in which the width and height in crop are +specified when using the (x,y,w,h) format, with 'pixels' to give +the size in pixels or 'data' to specify the size in the physical +unit used for the scalebar (after optional unit conversion via the +convert parameter). Note that the position of the top left corner +is given in pixels. The ((xmin,ymin),(xmax,ymax)) format must be +always given in pixels, and crop_unit is ignored if crop is +given in this format. The default is 'pixels'.
+
resolution : int, optional
+
the resolution along the x-axis (i.e. image width in pixels) to use +for the exported image. The default is None, which uses the size +of the original image (after optional cropping using crop).
+
cmap : str or callable or list of str or list of callable, optional
+
+

name of a named Matplotlib colormap used to color the data. see the +Matplotlib documentation +for more information. The default is 'inferno'.

+

In addition to the colormaps listed there, the following maps for linearly incrementing pure RGB channels are available, useful for e.g. displaying multichannel data with complementary colors (no overlap between colormaps possible): ['pure_reds', 'pure_greens', 'pure_blues', 'pure_yellows', 'pure_cyans', 'pure_purples', 'pure_greys'], where for example 'pure_reds' scales between RGB values (0,0,0) and (255,0,0), and 'pure_cyans' between (0,0,0) and (0,255,255).

+

Alternatively, a fully custom colormap may be used by entering a +ListedColormap +or LinearSegmentedColormap +object from the Matplotlib.colors module. For more information on +creating colormaps, see the Matplotlib documentation linked above.

+

For multichannel data, a list of colormaps must be provided, with +a separate colormap for each channel.

+
+
cmap_range : tuple of form (min,max) or None or 'automatic', optional
+
sets the scaling of the colormap. The minimum and maximum values to map the colormap to; values outside of this range will be colored according to the min and max value of the colormap. The default is None, which takes the lowest and highest value in the image. Alternatively 'automatic' may be specified, which scales between the 10th and 99th percentile. For multichannel data a list of cmap_range options per channel may be provided.
+
draw_bar : boolean, optional
+
whether to draw a scalebar on the image, such that this function +may be used to put other text on the image or just to apply a +colormap (by setting draw_bar=False and draw_text=False). The +default is True.
+
barsize : float or None, optional
+
size (in data units matching the original scale bar, e.g. nm) of the scale bar to use. The default is None, which takes the desired length for the current scale (ca. 15% of the width of the image for scale=1) and rounds this to the nearest option from a list of "nice" values.
+
scale : float, optional
+
factor to change the size of the scalebar+text with respect to the width of the image. Scale is chosen such that at scale=1 the font size of the scale bar text is approximately 10 pt when the image is printed at half the width of the text in a typical A4 paper document (e.g. two images side-by-side). Note that this is with respect to the output image, so after optional cropping and/or up/down sampling has been applied. The default is 1.
+
loc : int, one of [0,1,2,3], optional
+
Location of the scalebar on the image, where 0, 1, 2 and 3 +refer to the top left, top right, bottom left and bottom right +respectively. The default is 2, which is the bottom left corner.
+
convert : str, one of ['fm','pm','Å' or A,'nm','µm' or 'um','mm','cm','dm','m'], optional
+
Unit that will be used for the scale bar, the value will be +automatically converted if this unit differs from the pixel size +unit. The default is None, which uses micrometers.
+
barcolor : tuple of ints, optional
+
RGB color to use for the scalebar and text, given as a tuple of +form (R,G,B) or (R,G,B,A) where R, G B and A are values between 0 +and 255 for red, green, blue and alpha respectively. The default is +(255,255,255), which gives a white scalebar.
+
barthickness : int, optional
+
thickness in printer points of the scale bar itself. The default is +16.
+
barpad : int, optional
+
size in printer points of the padding between the scale bar and the +surrounding box. The default is 10.
+
draw_text : bool, optional
+
whether to draw the text specified in text on the image; the text is placed above the scale bar if draw_bar=True. The default is True.
+
text : str, optional
+
the text to draw on the image (above the scale bar if +draw_bar=True). The default is None, which gives the size and +unit of the scale bar (e.g. '10 µm').
+
font : str, optional
+
filename of an installed TrueType font ('.ttf' file) to use for the +text on the scalebar. The default is 'arialbd.ttf'.
+
fontsize : int, optional
+
base font size to use for the scale bar text. The default is 16. +Note that this size will be re-scaled according to resolution and +scale.
+
fontcolor : tuple of int, optional
+
(R,G,B) tuple where R, G and B are red, green and blue values from +0 to 255. The default is (255,255,255) giving white text.
+
fontbaseline : int, optional
+
vertical offset for the baseline of the scale bar text, measured from the top of the scale bar, in printer points. The default is 10.
+
fontpad : int, optional
+
minimum size in printer points of the space/padding between the +text and surrounding box. The default is 10.
+
draw_box : bool, optional
+
Whether to put a colored box behind the scalebar and text to +enhance contrast on busy images. The default is False.
+
boxcolor : tuple of ints, optional
+
RGB color to use for the box behind/around the scalebar and text, given as a tuple of form (R,G,B) or (R,G,B,A) where R, G, B and A are values between 0 and 255 for red, green, blue and alpha respectively. If no A is given, boxopacity is used. The default is (0,0,0), which gives a black box.
+
boxopacity : int, optional
+
value between 0 and 255 for the opacity/alpha of the box, useful +for creating a semitransparent box. The default is 255.
+
boxpad : int, optional
+
size of the space/padding around the box (with respect to the sides +of the image) in printer points. The default is 10.
+
save : bool, optional
+
whether to save the image as file. The default is True.
+
show_figure : bool, optional
+
whether to open matplotlib figure windows. The default is True.
+
+

Returns

+
+
Y×X×4 numpy.array containing the BGRA pixel data
+
 
+
+
+
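A minimal usage sketch (the file name, image index and scalebar settings below are hypothetical, and it is assumed here that sp8_lif can be imported from the package root, since the class lives in scm_confocal.sp8):

    from scm_confocal import sp8_lif  # assumed package-level import of scm_confocal.sp8.sp8_lif

    lif = sp8_lif('example_data.lif')   # hypothetical .lif file
    im = lif.get_image(0)               # obtain the image by its index in the .lif file

    # export frame 0 of channel 0 with a 10 µm scalebar in the bottom left corner,
    # drawing a semitransparent box behind the bar for contrast on busy images
    exported = im.export_with_scalebar(
        frame=0,
        channel=0,
        barsize=10,
        convert='um',
        loc=2,
        draw_box=True,
        boxopacity=128,
    )
    # 'exported' is the Y×X×4 array with BGRA pixel data described above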
+def get_dimension_steps(self, dim, use_stack_indices=False) +
+
+

return a list of physical values along a certain dimension, e.g. +the x-coordinates or timesteps.

+
+
+def get_image_metadata(self, indices=slice(None, None, None)) +
+
+

loads the part of the metadata containing information about the time, position etc. for each frame of the series and returns a dataframe indexed by image frame

+

Parameters

+
+
indices : slice object, optional
+
which image frames to load the metadata for. The default is all +frames.
+
+

Returns

+
+
imagedata : pandas.DataFrame
+
the metadata for the images, indexed by frame number.
+
+
+
+def get_metadata(self) +
+
+

loads OME metadata from visitech .ome.tif file and returns xml tree +object

+

Returns

+
+
xml.etree.ElementTree
+
formatted XML metadata. Can be indexed with +xml_root.find('')
+
+
+
+def get_metadata_dimensions(self) +
+
+

finds the stack's dimensionality and logical shape based on the +embedded metadata

+

Returns

+
+
shape : tuple of ints
+
logical sizes of the stack
+
dimorder : tuple of strings
+
order of the dimensions corresponding to the shape
+
+
+
+def get_pixelsize(self) +
+
+

shortcut to get (z,y,x) pixelsize with unit

+
+
+def get_series_name(self) +
+
+

Returns a name for the series based on the filename.

+

Returns

+
+
str
+
 
+
+
+
+def load_data(self, indices=slice(None, None, None), dtype=numpy.uint16) +
+
+

load images from datafile into 3D numpy array

+

Parameters

+
+
indices : slice object or list of ints, optional
+
which images from tiffstack to load. The default is +slice(None,None,None).
+
dtype : np int datatype
+
data type / bit depth to rescale data to.
+
+

Returns

+
+
numpy.ndarray containing image data in dim order (im,y,x)
+
 
+
+
+
+def load_stack(self, dim_range={}, dtype=numpy.uint16) +
+
+

Load the data and reshape into 4D stack with the following dimension +order: ('channel','time','z-axis','y-axis','x-axis') where dimensions +with len 1 are omitted.

+

For loading only part of the total dataset, the dim_range parameter can +be used to specify a range along any of the dimensions. This will be +more memory efficient than loading the entire stack and then discarding +part of the data. For slicing along the x or y axis this is not +possible and whole (xy) images must be loaded prior to discarding +data outside the specified x or y axis range.

+

Parameters

+
+
dim_range : dict, optional
+
dict, with keys corresponding to channel/dimension labels as above +and slice objects as values. This allows you to only load part of +the data along any of the dimensions, such as only loading two +time steps or a particular z-range. An example use for only taking +time steps up to 5 and z-slice 20 to 30 would +be: +dim_range={'time':slice(None,5), 'z-axis':slice(20,30)}. +The default is {} which corresponds to the full file.
+
dtype : (numpy) datatype, optional
+
type to scale data to. The default is np.uint16.
+
remove_backsteps : bool
+
whether to discard the frames that were recorded on the downward backsteps
+
+

Returns

+
+
data : numpy.ndarray
+
ndarray with the pixel values
+
+
+
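As a short sketch of partial loading with dim_range (the variable series below is a hypothetical, already-opened instance of the class documented here):

    import numpy as np

    # load only time steps up to 5 and z-slices 20 to 30, rescaled to 16-bit integers
    data = series.load_stack(
        dim_range={'time': slice(None, 5), 'z-axis': slice(20, 30)},
        dtype=np.uint16,
    )
    print(data.shape)  # dimensions of length 1 are omitted, as described above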
+def yield_stack(self, dim_range={}, dtype=numpy.uint16, remove_backsteps=True) +
+
+

Lazy-load the data and reshape into 4D stack with the following +dimension order: ('time','z-axis','y-axis','x-axis'). Returns a +generator which yields a z-stack for each call, which is loaded upon +calling it.

+

For loading only part of the total dataset, the dim_range parameter can +be used to specify a range along any of the dimensions. This will be +more memory efficient than loading the entire stack and then discarding +part of the data. For slicing along the x or y axis this is not +possible and whole (xy) images must be loaded prior to discarding +data outside the specified x or y axis range. +The shape of the stack can be accessed without loading data using the +stack_shape attribute after creating the yield_stack object.

+

Parameters

+
+
dim_range : dict, optional
+
+

dict, with keys corresponding to channel/dimension labels as above +and slice objects as values. This allows you to only load part of +the data along any of the dimensions, such as only loading two +time steps or a particular z-range. An example use for only taking +time steps up to 5 and z-slice 20 to 30 would +be:

+
dim_range={'time':slice(None,5), 'z-axis':slice(20,30)}.
+
+

The default is {} which corresponds to the full file.

+
+
dtype : (numpy) datatype, optional
+
type to scale data to. The default is np.uint16.
+
remove_backsteps : bool
+
whether to discard the frames that were recorded on the downward backsteps
+
+

Returns

+
+
zstack : iterable/generator yielding numpy.ndarray
+
iterates over the time steps, yielding for each time step a z-stack as np.ndarray with the pixel values
+
+
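A hedged sketch of lazy loading (again assuming a hypothetical, already-opened instance named series):

    import numpy as np

    # lazily iterate over the first 5 time steps; each iteration loads one z-stack
    zstacks = series.yield_stack(
        dim_range={'time': slice(None, 5)},
        dtype=np.uint16,
        remove_backsteps=True,
    )
    for zstack in zstacks:
        print(zstack.shape, zstack.mean())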
+
+
+
+
+
+ +
+ + + \ No newline at end of file diff --git a/util.html b/util.html new file mode 100644 index 0000000..5a76c3f --- /dev/null +++ b/util.html @@ -0,0 +1,584 @@ + + + + + + +scm_confocal.util API documentation + + + + + + + + + + + +
+
+
+

Module scm_confocal.util

+
+
+
+
+
+
+
+
+

Functions

+
+
+def align_stack(images, startim=0, threshold=0, binning=1, smooth=0, upsample=1, startoffset=(0, 0), trim=True, blocksize=None, show_process_im=False) +
+
+

Cross correlation alignment of image stack. Based around +skimage.feature.register_translation which enables sub-pixel precise +translation of images.

+

When preprocessing (smoothing and/or binning and/or thresholding) is used, +a copy of the data is created and used for determining the image shift, but +the original (unprocessed) data is corrected for image shift and returned.

+

The order of preprocessing is: first thresholding, then binning, then smoothing.

+

Parameters

+
+
images : 3d numpy array
+
the dataset which will be aligned along the first dimension (e.g. z)
+
startim : int
+
starting index that acts as reference for rest of stack
+
threshold : float
+
any pixel value below threshold is set to 0 before alignment
+
binning : int
+
factor to bin pixels in (x,y)
+
smooth : float
+
size of the Gaussian kernel for smoothing prior to calculating the translation
+
upsample : int
+
precision of translation in units of 1/pixel
+
startoffset : tuple of floats (y,x)
+
shift to apply to the starting image before alignment
+
+

Returns

+
+
images : numpy.array
+
the image data with translation and (optional) trimming applied
+
shifts : list of (y,x) tuples
+
image shift values for each image in the dataset
+
+
+
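A minimal usage sketch with synthetic placeholder data (the preprocessing values below are illustrative, not recommendations):

    import numpy as np
    from scm_confocal.util import align_stack

    # stack of 2D images along the first dimension, synthetic data for illustration
    images = np.random.randint(0, 255, size=(10, 256, 256), dtype=np.uint8)

    aligned, shifts = align_stack(
        images,
        startim=0,     # reference image for the rest of the stack
        threshold=20,  # zero out dim pixels before cross-correlation
        binning=2,     # bin 2x2 pixels in the preprocessed copy
        smooth=1.5,    # Gaussian smoothing of the preprocessed copy
        upsample=10,   # translation precision of 1/10 pixel
    )
    print(shifts)      # (y, x) shift applied to each image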
+def average_nearest_neighbour_distance(features, pos_cols=['x (um)', 'y (um)', 'z (um)']) +
+
+

finds the average distance of nearest neighbours from a pandas DataFrame of coordinates.

+

Parameters

+
+
features : pandas DataFrame
+
dataframe containing the particle coordinates
+
pos_cols : list of strings, optional
+
Names of columns to use for coordinates. The default is +['x (um)','y (um)','z (um)'].
+
+

Returns

+
+
float
+
average distance to the closest particle for all the pairs in the +set
+
+
+
+def bin_stack(images, n=1, blocksize=None, quiet=False, dtype=None) +
+
+

bins numpy ndarrays in arbitrary dimensions by a factor n. Prior to +binning, elements from the end are deleted until the length is a +multiple of the bin factor. Executes averaging of bins in floating +point precision, which is memory intensive for large stacks. Using +smaller blocks reduces memory usage, but is less efficient.

+

Parameters

+
+
images : numpy.ndarray
+
ndarray containing the data
+
n : int or tuple of int, optional
+
factor to bin with for all dimensions (int) or each dimension +individually (tuple with one int per dimension). The default is 1.
+
blocksize : int, optional
+
number of (binned) slices to process at a time to conserve memory. The default is the entire stack.
+
quiet : bool, optional
+
suppresses printed output when True. The default is False.
+
dtype : (numpy) datatype, optional
+
datatype to use for output. Averaging of the binned pixels always +occurs in floating point precision. The default is np.uint8.
+
+

Returns

+
+
images : numpy.ndarray
+
binned stack
+
+
+
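A short sketch with synthetic placeholder data, showing both a single bin factor and a per-dimension tuple:

    import numpy as np
    from scm_confocal.util import bin_stack

    stack = np.random.randint(0, 2**16, size=(40, 512, 512), dtype=np.uint16)

    # bin by a factor 2 in every dimension
    binned = bin_stack(stack, n=2)

    # or bin only in y and x, processing 10 binned slices at a time to save memory
    binned_xy = bin_stack(stack, n=(1, 2, 2), blocksize=10, dtype=np.uint16)
    print(binned.shape, binned_xy.shape)  # (20, 256, 256) and (40, 256, 256)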
+def fit_powerlaw(x, y, weights=None, **kwargs) +
+
+

Linear regression in log space of the MSD to get the diffusion constant, which is a power law in linear space of the form A·x^n

+

Parameters

+
+
x : list or numpy.array
+
x coordinates of data points to fit
+
y : list or numpy.array
+
y coordinates of data points to fit
+
weights : list or numpy.array, optional
+
list of weights to use for each (x,y) coordinate. The default is +None.
+
+

**kwargs : +arguments passed to scipy.optimize.curve_fit

+

Returns

+
+
A : float
+
constant A
+
n : float
+
power n
+
sigmaA : float
+
standard deviation in A
+
sigmaN : float
+
standard deviation in n
+
+
+
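A minimal sketch with synthetic data following A·x^n for A = 0.5 and n = 1 (the noise level is arbitrary):

    import numpy as np
    from scm_confocal.util import fit_powerlaw

    # synthetic MSD-like data, strictly positive so the log-space fit is well defined
    t = np.linspace(0.1, 10, 50)
    msd = 0.5 * t**1.0 + np.random.normal(0, 0.005, t.shape)

    A, n, sigmaA, sigmaN = fit_powerlaw(t, msd)
    print(f'A = {A:.3f} ± {sigmaA:.3f}, n = {n:.3f} ± {sigmaN:.3f}')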
+def flatfield_correction_apply(images, corrim, dtype=None, check_overflow=True) +
+
+

Apply a correction to all images in a dataset based on a mask / +correction image such as provided by util.flatfield_correction_init. +Pixel values are divided by the correction image, accounting for +integer overflow by clipping to the max value of the (integer) dtype.

+

Note that overflow checking is currently implemented using numpy masked +arrays, which are extremely slow (up to 10x) when compared to normal +numpy arrays. It can be bypassed using check_overflow for a memory and +performance improvement.

+

Parameters

+
+
images : (sequence of) numpy.array
+
the images to correct. The last two dimensions are taken as the 2D images, other dimensions are preserved. Must have 2 or more dimensions.
+
corrim : numpy.array
+
The correction image to apply. Must have 2 or more dimensions, if +there are more than 2 it must match images according to numpy +broadcasting rules.
+
dtype : data type, optional
+
data type used for the output. The default is images.dtype.
+
check_overflow : bool, optional
+
Whether to check and avoid integer overflow. The default is True.
+
+

Returns

+
+
numpy.array
+
the corrected image array
+
+

See Also

+

flatfield_correction_init()

+
+
+def flatfield_correction_init(images, kernelsize, average=True) +
+
+

Provides a correction image for inhomogeneous illumination based on low-frequency Fourier components. Particularly useful for data from the Visitech recorded at relatively large frame size / low imaging rate.

+

Parameters

+
+
images : (sequence of) numpy array with >= 2 dimensions
+
image(s) to calculate a correction image for. The last two +dimensions are taken as the 2D images.
+
kernelsize : int
+
cutoff size in Fourier-space pixels (i.e. cycles per image size) of the cone-shaped low-pass Fourier filter.
+
average : bool, optional
+
whether to average correction images along the first dimension of +the supplied data. Requires >2 dimensions in the input data. The +default is True.
+
+

Returns

+
+
numpy array
+
(array of) normalized correction images where the maximum is scaled +to 1.
+
+

See Also

+

flatfield_correction_apply()

+
+
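A hedged sketch of the init/apply workflow with synthetic placeholder data (the kernelsize value is illustrative only):

    import numpy as np
    from scm_confocal.util import flatfield_correction_init, flatfield_correction_apply

    # stack of raw frames with inhomogeneous illumination (placeholder data)
    images = np.random.randint(100, 2**16, size=(20, 512, 512), dtype=np.uint16)

    # estimate the illumination profile from the low-frequency Fourier components,
    # averaging the correction images over the first dimension of the stack
    corrim = flatfield_correction_init(images, kernelsize=5, average=True)

    # divide the data by the correction image; keep the overflow check unless the
    # data is known to stay within the dtype range
    corrected = flatfield_correction_apply(images, corrim, check_overflow=True)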
+def mean_square_displacement(features, pos_cols=['x', 'y', 'z'], t_col='t (s)', nparticles=None, pickrandom=False, bins=20, tmin=None, tmax=None, itmin=1, itmax=None, parallel=False, cores=None, linear_sampling=False) +
+
+

calculate the mean square displacement vs time for linked particles

+

Parameters

+
+
features : pandas.DataFrame
+
output from trackpy.link containing tracking data
+
pos_cols : list of str, optional
+
names of columns to use for coordinates. The default is +['x','y','z'].
+
t_col : str, optional
+
name of column containing timestamps. The default is 't (s)'.
+
nparticles : int, optional
+
number of particles to use for calculation (useful for large +datasets). The default is all particles.
+
pickrandom : bool, optional
+
whether to pick nparticles randomly or not, if False it takes the +n longest tracked particles from data. The default is False.
+
bins : int or sequence of floats, optional
+
number of bins or bin edges for output. The default is 20.
+
tmin : float, optional
+
left edge of first bin. The default is min(t_col).
+
tmax : float, optional
+
 
+
right edge of the last bin. The default is max(t_col).
+
itmin : int, optional
+
minimum (integer) step size in timesteps. The default is 1.
+
itmax : int, optional
+
maximum (integer) step size in timesteps. The default is no limit.
+
parallel : bool, optional
+
whether to use the parallelized implementation. Requires the rest of the code to be protected in an if __name__ == '__main__' block. The default is False.
+
cores : int, optional
+
the number of cores to use when using the parallelized +implementation. When parallel=False this option is ignored
+
+

Returns

+
+
binedges : numpy.array
+
edges of time bins
+
bincounts : numpy.array
+
number of sampling points for each bin
+
binmeans : numpy.array
+
mean square displacement values
+
+
+
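A minimal sketch of a call, assuming features is the linked-particle DataFrame produced by trackpy.link with coordinate columns 'x', 'y', 'z' and a timestamp column 't (s)' (placeholder names, matching the defaults above):

    from scm_confocal.util import mean_square_displacement

    binedges, bincounts, binmeans = mean_square_displacement(
        features,
        pos_cols=['x', 'y', 'z'],
        t_col='t (s)',
        bins=20,
        itmin=1,
    )
    # binmeans holds the mean square displacement per time bin, binedges the bin edges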
+def mean_square_displacement_legacy(features, pos_cols=['x', 'y', 'z'], t_col='t (s)', nparticles=None, pickrandom=False, nbins=20, tmin=None, tmax=None, itmin=1, itmax=None, parallel=False, cores=None) +
+
+

calculate the mean square displacement vs time for linked particles

+

Parameters

+
+
features : pandas.DataFrame
+
output from trackpy.link containing tracking data
+
pos_cols : list of str, optional
+
names of columns to use for coordinates. The default is +['x','y','z'].
+
t_col : str, optional
+
name of column containing timestamps. The default is 't (s)'.
+
nparticles : int, optional
+
number of particles to use for calculation (useful for large +datasets). The default is all particles.
+
pickrandom : bool, optional
+
whether to pick nparticles randomly or not, if False it takes the +n longest tracked particles from data. The default is False.
+
nbins : int, optional
+
number of bins for output. The default is 20.
+
tmin : float, optional
+
left edge of first bin. The default is min(t_col).
+
tmax : float, optional
+
 
+
right edge of the last bin. The default is max(t_col).
+
itmin : int, optional
+
minimum (integer) step size in timesteps. The default is 1.
+
itmax : int, optional
+
maximum (integer) step size in timesteps. The default is no limit.
+
parallel : bool, optional
+
whether to use the parallelized implementation. Requires the rest of the code to be protected in an if __name__ == '__main__' block. The default is False.
+
cores : int, optional
+
the number of cores to use when using the parallelized +implementation. When parallel=False this option is ignored
+
+

Returns

+
+
binedges : numpy.array
+
edges of time bins
+
bincounts : numpy.array
+
number of sampling points for each bin
+
binmeans : numpy.array
+
mean square displacement values
+
+
+
+def mean_square_displacement_per_frame(features, pos_cols=['x', 'y'], feat_col='particle') +
+
+

Calculate the mean square movement of all tracked features between +subsequent frames using efficient pandas linear algebra

+

Parameters

+
+
features : pandas.Dataframe
+
dataframe containing the tracking data over timesteps indexed by +frame number and containing coordinates of features.
+
pos_cols : list of str, optional
+
names of the columns containing coordinates. The default is +['x','y'].
+
feat_col : str
+
name of the column containing feature identifiers. The default is 'particle'.
+
+

Returns

+
+
msd : numpy.array
+
averages of the squared displacements between each two steps
+
+
+
+def multiply_intensity(data, factor, dtype=None) +
+
+

For multiplying the values of a numpy array while accounting for +integer overflow issues in integer datatypes. Corrected values larger +than the datatype max are set to the max value.

+

Parameters

+
+
data : numpy.ndarray
+
array containing the data values
+
factor : float
+
factor to multiply data with
+
dtype : (numpy) datatype, optional
+
Datatype to scale data to. The default is the same type as the +input data.
+
+

Returns

+
+
data : numpy.ndarray
+
data with new intensity values.
+
+
+
+def pair_correlation_2d(features, rmin=0, rmax=10, dr=None, ndensity=None, boundary=None, column_headers=['y', 'x'], periodic_boundary=False, handle_edge=True) +
+
+

calculates g(r) via a 'conventional' distance histogram method for a +set of 2D coordinate sets. Edge correction is fully analytic.

+

Parameters

+
+
features : pandas DataFrame or numpy.ndarray
+
contains coordinates in (y,x)
+
rmin : float, optional
+
lower bound for the pairwise distance, left edge of 0th bin. The +default is 0.
+
rmax : float, optional
+
upper bound for the pairwise distance, right edge of last bin. The +default is 10.
+
dr : float, optional
+
bin width for the pairwise distance bins. The default is +(rmax-rmin)/20.
+
ndensity : float, optional
+
number density of particles in sample. The default is None which +computes the number density from the input data.
+
boundary : array-like, optional
+
positions of the walls that define the bounding box of the +coordinates, given as +(ymin,ymax,xmin,xmax). The +default is the min and max values in the dataset along each +dimension.
+
column_headers : list of string, optional
+
column labels which contain the coordinates to use in case features is given as a pandas.DataFrame. The default is ['y','x'].
+
periodic_boundary : bool, optional
+
whether periodic boundary conditions are used. The default is +False.
+
handle_edge : bool, optional
+
whether to correct for edge effects in non-periodic boundary +conditions. The default is True.
+
+

Returns

+
+
edges : numpy.array
+
edges of the bins in r
+
counts : numpy.array
+
normalized count values in each bin of the g(r)
+
+
+
+def pair_correlation_3d(features, rmin=0, rmax=10, dr=None, ndensity=None, boundary=None, column_headers=['z', 'y', 'x'], periodic_boundary=False, handle_edge=True) +
+
+

calculates g(r) via a 'conventional' distance histogram method for a +set of 3D coordinate sets. Edge correction is fully analytic and based +on refs [1] and [2].

+

Parameters

+
+
features : pandas DataFrame or numpy.ndarray
+
contains coordinates in (z,y,x)
+
rmin : float, optional
+
lower bound for the pairwise distance, left edge of 0th bin. The +default is 0.
+
rmax : float, optional
+
upper bound for the pairwise distance, right edge of last bin. The +default is 10.
+
dr : float, optional
+
bin width for the pairwise distance bins. The default is +(rmax-rmin)/20.
+
ndensity : float, optional
+
number density of particles in sample. The default is None which +computes the number density from the input data.
+
boundary : array-like, optional
+
positions of the walls that define the bounding box of the +coordinates, given as +(zmin,zmax,ymin,ymax,xmin,xmax). The +default is the min and max values in the dataset along each +dimension.
+
column_headers : list of string, optional
+
column labels which contain the coordinates to use in case features +is given as a pandas.DataFrame. The default is ['z','y','x'].
+
periodic_boundary : bool, optional
+
whether periodic boundary conditions are used. The default is +False.
+
handle_edge : bool, optional
+
whether to correct for edge effects in non-periodic boundary +conditions. The default is True.
+
+

Returns

+
+
edges : numpy.array
+
edges of the bins in r
+
counts : numpy.array
+
normalized count values in each bin of the g(r)
+
+

References

+

[1] Markus Deserno (2014). How to calculate a three-dimensional g(r) under periodic boundary conditions. https://www.cmu.edu/biolphys/deserno/pdf/gr_periodic.pdf

+

[2] Kopera, B. A. F., & Retsch, M. (2018). Computing the 3D Radial +Distribution Function from Particle Positions: An Advanced Analytic +Approach. Analytical Chemistry, 90(23), 13909–13914. +https://doi.org/10.1021/acs.analchem.8b03157

+
+
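A minimal sketch with random placeholder coordinates in a 20×20×20 box (rmax, dr and the box size are arbitrary choices):

    import numpy as np
    from scm_confocal.util import pair_correlation_3d

    # particle coordinates in (z, y, x) order
    coords = np.random.uniform(0, 20, size=(1000, 3))

    edges, counts = pair_correlation_3d(
        coords,
        rmin=0,
        rmax=5,
        dr=0.1,
        boundary=(0, 20, 0, 20, 0, 20),  # (zmin, zmax, ymin, ymax, xmin, xmax)
        periodic_boundary=False,
        handle_edge=True,
    )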
+def plot_stack_histogram(images, bin_edges=range(0, 256), newfig=True, legendname=None, title='intensity histogram', **kwargs) +
+
+

manually flattens a list of images to a list of pixel values and plots a histogram. Multiple calls can be combined in a single figure using the newfig and legendname options

+

Parameters

+
+
images : numpy ndarray
+
array containing pixel values
+
bin_edges : list or range, optional
+
edges of bins to use. The default is range(0,256).
+
newfig : bool, optional
+
Whether to open a new figure or to add to currently active figure. +The default is True.
+
legendname : string, optional
+
label to use for the legend. The default is None.
+
title : string, optional
+
text to use as plot title. The default is 'intensity histogram'.
+
+

Returns

+
+
pyplot figure handle
+
 
+
+
+
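A short sketch of combining two histograms in one figure via newfig and legendname (synthetic placeholder stacks):

    import numpy as np
    from scm_confocal.util import plot_stack_histogram

    before = np.random.randint(0, 256, size=(10, 256, 256), dtype=np.uint8)
    after = np.random.randint(0, 256, size=(10, 256, 256), dtype=np.uint8)

    # first call opens a new figure, second call adds to the active figure
    fig = plot_stack_histogram(before, newfig=True, legendname='before correction')
    plot_stack_histogram(after, newfig=False, legendname='after correction')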
+def saveprompt(question='Save/overwrite? 1=YES, 0=NO. ') +
+
+

Asks the user a question (whether to save). If 1 is entered it returns True; for any other answer it returns False

+

Parameters

+
+
question : string
+
The question to prompt the user for
+
+

Returns

+
+
save : bool
+
whether to save
+
+
+
+def subtract_background(images, val=0, percentile=False) +
+
+

subtract a constant value from a numpy array without going below 0

+

Parameters

+
+
images : numpy ndarray
+
images to correct.
+
percentile : bool, optional
+
Whether to give the value as a percentile of the stack rather than an absolute value to subtract. The default is False.
+
val : int or float, optional
+
Value or percentile to subtract. The default is 0.
+
+

Returns

+
+
images : numpy ndarray
+
the corrected stack.
+
+
+
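A minimal sketch showing both the absolute and percentile modes (synthetic placeholder data):

    import numpy as np
    from scm_confocal.util import subtract_background

    stack = np.random.randint(0, 2**16, size=(10, 512, 512), dtype=np.uint16)

    # subtract a fixed offset of 200 counts, clipping the result at 0
    corrected = subtract_background(stack, val=200)

    # or subtract the 5th percentile of the whole stack instead of an absolute value
    corrected_pct = subtract_background(stack, val=5, percentile=True)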
+def write_textfile(params, filename='parameters.txt') +
+
+

stores parameter names and values in text file

+

Parameters

+
+
params : dictionary of name:value
+
the data to store
+
filename : str, optional
+
file name to use for saving. The default is "parameters.txt".
+
+

Returns

+

None.

+
+
+
+
+
+
+ +
+ + + \ No newline at end of file