aicssegmentation.core package

Submodules

aicssegmentation.core.MO_threshold module

aicssegmentation.core.MO_threshold.MO(structure_img_smooth: numpy.ndarray, global_thresh_method: str, object_minArea: int, extra_criteria: bool = False, local_adjust: float = 0.98, return_object: bool = False, dilate: bool = False)[source]

Implementation of the “Masked Object Thresholding” algorithm, a hybrid thresholding method combining two levels of thresholds. The steps are: [1] a global threshold is calculated, [2] each individual connected component is extracted after applying the global threshold, [3] small objects are removed, [4] within each remaining object, a local Otsu threshold is calculated and applied, with an optional local threshold adjustment ratio (to make the segmentation more or less conservative). An extra check can be used in step [4], which requires the local Otsu threshold to be larger than 1/3 of the global Otsu threshold; otherwise the connected component is discarded.

structure_img_smooth: np.ndarray

the image (should have already been smoothed) to apply the method on

global_thresh_method: str

which method to use for calculating the global threshold. Options include: “triangle” (or “tri”), “median” (or “med”), and “ave_tri_med” (or “ave”). “ave” refers to the average of the “triangle” threshold and the “median” threshold.

object_minArea: int

the size filter for excluding small objects before applying the local threshold

extra_criteria: bool

whether to use the extra check when doing local thresholding, default is False

local_adjust: float

a ratio to apply on local threshold, default is 0.98

return_object: bool

whether to also return the global thresholding result, so that the individual objects on which the local thresholding is applied can be obtained

dilate: bool

whether to perform dilation on bw_low_level prior to the high level threshold

a binary nd array of the segmentation result
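
A minimal usage sketch (assuming img_smooth is an already-smoothed 3D numpy array; the parameter values are illustrative only):

>>> from aicssegmentation.core.MO_threshold import MO
>>> # masked object thresholding with a triangle global threshold
>>> bw = MO(img_smooth, global_thresh_method="tri", object_minArea=1200, local_adjust=0.98)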

aicssegmentation.core.MO_threshold.MO_high_level(structure_img_smooth: numpy.ndarray, bw_low_level: numpy.ndarray, extra_criteria: bool = False, local_adjust: float = 0.98) → numpy.ndarray[source]

Implementation of the “Masked Object Thresholding” algorithm, a hybrid thresholding method combining two levels of thresholds. The steps are: [1] a global threshold is calculated, [2] each individual connected component is extracted after applying the global threshold, [3] small objects are removed, [4] within each remaining object, a local Otsu threshold is calculated and applied, with an optional local threshold adjustment ratio (to make the segmentation more or less conservative). An extra check can be used in step [4], which requires the local Otsu threshold to be larger than 1/3 of the global Otsu threshold; otherwise the connected component is discarded. This function implements the high level part.

structure_img_smooth: np.ndarray

the image (should have already been smoothed) to apply the method on

bw_low_level: np.ndarray

low level segmentation

extra_criteria: bool

whether to use the extra check when doing local thresholding, default is False

local_adjust: float

a ratio to apply on local threshold, default is 0.98

a binary nd array of the segmentation result

aicssegmentation.core.MO_threshold.MO_low_level(structure_img_smooth: numpy.ndarray, global_thresh_method: str, object_minArea: int, dilate: bool = False) → numpy.ndarray[source]

Implementation of the “Masked Object Thresholding” algorithm, a hybrid thresholding method combining two levels of thresholds. The steps are: [1] a global threshold is calculated, [2] each individual connected component is extracted after applying the global threshold, [3] small objects are removed, [4] within each remaining object, a local Otsu threshold is calculated and applied, with an optional local threshold adjustment ratio (to make the segmentation more or less conservative). An extra check can be used in step [4], which requires the local Otsu threshold to be larger than 1/3 of the global Otsu threshold; otherwise the connected component is discarded. This function implements the low level part.

structure_img_smooth: np.ndarray

the image (should have already been smoothed) to apply the method on

global_thresh_method: str

which method to use for calculating the global threshold. Options include: “triangle” (or “tri”), “median” (or “med”), and “ave_tri_med” (or “ave”). “ave” refers to the average of the “triangle” threshold and the “median” threshold.

object_minArea: int

the size filter for excluding small objects before applying the local threshold

dilate: bool

whether to perform dilation on bw_low_level prior to the high level threshold

a binary nd array of the segmentation result
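
The two stages can also be called separately, e.g. to inspect the low level mask before the local thresholding step. A sketch under the same assumptions as above (img_smooth and the parameter values are placeholders):

>>> from aicssegmentation.core.MO_threshold import MO_low_level, MO_high_level
>>> # stage 1: global threshold + size filter -> low level objects
>>> bw_low = MO_low_level(img_smooth, global_thresh_method="tri", object_minArea=1200)
>>> # stage 2: per-object Otsu threshold within the low level mask
>>> bw = MO_high_level(img_smooth, bw_low, local_adjust=0.98)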

aicssegmentation.core.hessian module

aicssegmentation.core.hessian.absolute_3d_hessian_eigenvalues(nd_array: numpy.ndarray, sigma: float = 1, scale: bool = True, whiteonblack: bool = True)[source]

Eigenvalues of the hessian matrix calculated from the input array sorted by absolute value.

nd_array: np.ndarray

nd array from which to compute the hessian matrix.

sigma: float

Standard deviation used for the Gaussian kernel to smooth the array. Default is 1

scale: bool

whether the hessian elements will be scaled by sigma squared. Default is True

whiteonblack: boolean

whether the image has white objects on a black background. Default is True

list of eigenvalues [eigenvalue1, eigenvalue2, …]
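
For example (a small synthetic volume; the sigma value is illustrative):

>>> import numpy as np
>>> from aicssegmentation.core.hessian import absolute_3d_hessian_eigenvalues
>>> vol = np.random.rand(16, 64, 64)
>>> # returns one eigenvalue array per Hessian dimension (three for a 3D input),
>>> # sorted by absolute value
>>> eigs = absolute_3d_hessian_eigenvalues(vol, sigma=1, scale=True, whiteonblack=True)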

aicssegmentation.core.hessian.compute_3d_hessian_matrix(nd_array: numpy.ndarray, sigma: float = 1, scale: bool = True, whiteonblack: bool = True) → numpy.ndarray[source]

Computes the hessian matrix for an nd_array. The implementation was adapted from: https://github.com/ellisdg/frangi3d/blob/master/frangi/hessian.py

nd_array: np.ndarray

nd array from which to compute the hessian matrix.

sigma: float

Standard deviation used for the Gaussian kernel to smooth the array. Default is 1

scale: bool

whether the hessian elements will be scaled by sigma squared. Default is True

whiteonblack: boolean

whether the image has white objects on a black background. Default is True

hessian array of shape (…, ndim, ndim)

aicssegmentation.core.output_utils module

aicssegmentation.core.output_utils.generate_segmentation_contour(im)[source]

generate the contour of the segmentation

aicssegmentation.core.output_utils.output_hook(im, names, out_flag, output_path, fn)[source]

general hook for customized output

aicssegmentation.core.output_utils.save_segmentation(bw: numpy.ndarray, contour_flag: bool, output_path: pathlib.Path, fn: str, suffix: str = '_struct_segmentation')[source]

save the segmentation into a tiff file

bw: np.ndarray

the segmentation to save

contour_flag: bool

whether to also save segmentation contour

output_path: Path

the path to save

fn: str

the core file name to use; for example, with “img_102” and a suffix of “_seg”, the file name of the output is “img_102_seg.tiff”

suffix: str

the suffix to add to the output filename
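
A usage sketch (bw is assumed to be a binary segmentation array; the output path and file name are placeholders):

>>> from pathlib import Path
>>> from aicssegmentation.core.output_utils import save_segmentation
>>> # with the default suffix, writes ./results/img_102_struct_segmentation.tiff
>>> save_segmentation(bw, contour_flag=False, output_path=Path("./results"), fn="img_102")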

aicssegmentation.core.pre_processing_utils module

aicssegmentation.core.pre_processing_utils.edge_preserving_smoothing_3d(struct_img: numpy.ndarray, numberOfIterations: int = 10, conductance: float = 1.2, timeStep: float = 0.0625, spacing: List = [1, 1, 1])[source]

perform edge preserving smoothing on a 3D image

struct_img: np.ndarray

the image to be smoothed

numberOfIterations: int

how many smoothing iterations to perform. More iterations give more smoothing effect. Default is 10.

timeStep: float

the time step to be used for each iteration, important for numerical stability. Default is 0.0625 for 3D images. Changing it is not recommended.

spacing: List

the spacing of voxels in three dimensions. Default is [1, 1, 1]

https://itk.org/Doxygen/html/classitk_1_1GradientAnisotropicDiffusionImageFilter.html
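
For example (struct_img is assumed to be a normalized 3D numpy array):

>>> from aicssegmentation.core.pre_processing_utils import edge_preserving_smoothing_3d
>>> # anisotropic diffusion smoothing with the default settings
>>> img_smooth = edge_preserving_smoothing_3d(struct_img)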

aicssegmentation.core.pre_processing_utils.image_smoothing_gaussian_3d(struct_img, sigma, truncate_range=3.0)[source]

wrapper for 3D Gaussian smoothing

aicssegmentation.core.pre_processing_utils.image_smoothing_gaussian_slice_by_slice(struct_img, sigma, truncate_range=3.0)[source]

wrapper for applying 2D Gaussian smoothing slice by slice on a 3D image

aicssegmentation.core.pre_processing_utils.intensity_normalization(struct_img: numpy.ndarray, scaling_param: List)[source]

Normalize the intensity of input image so that the value range is from 0 to 1.

struct_img: np.ndarray

a 3d image

scaling_param: List

a list with only one value 0, i.e. [0]: Min-Max normalization, the max intensity of the image will be mapped to 1 and the min will be mapped to 0

a list with a single positive integer v, e.g. [5000]: Min-Max normalization, but first any original intensity value > v is considered an outlier and reset to the min intensity of the image. Afterwards, the max will be mapped to 1 and the min will be mapped to 0

a list with two float values [a, b], e.g. [1.5, 10.5]: Auto-contrast normalization. First, the mean and standard deviation (std) of the original intensity in the image are calculated. Next, the intensity is truncated into the range [mean - a * std, mean + b * std], and then rescaled to [0, 1]

a list with four float values [a, b, c, d], e.g. [0.5, 15.5, 200, 4000]: Auto-contrast normalization. Similar to the case above, but only intensity values between c and d are used to calculate the mean and std
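
For example (struct_img is assumed to be a raw 3D numpy array; the scaling parameters are illustrative):

>>> from aicssegmentation.core.pre_processing_utils import intensity_normalization
>>> # simple min-max normalization
>>> img_norm = intensity_normalization(struct_img, scaling_param=[0])
>>> # auto-contrast: truncate to [mean - 1.5*std, mean + 10.5*std], then rescale to [0, 1]
>>> img_norm = intensity_normalization(struct_img, scaling_param=[1.5, 10.5])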

aicssegmentation.core.pre_processing_utils.suggest_normalization_param(structure_img0)[source]

suggest scaling parameter assuming the image is a representative example of this cell structure

aicssegmentation.core.seg_dot module

aicssegmentation.core.seg_dot.dot_2d(struct_img, log_sigma, cutoff=-1)[source]

apply 2D spot filter on a 2D image

struct_img: np.ndarray

the 2D image to segment

log_sigma: float

the size of the filter, which can be set based on the estimated radius of your target dots. For example, if visually the diameter of the dots is usually 3~4 pixels, then you may want to set this as 1 or something near 1 (like 1.25).

cutoff: float

the cutoff value to apply on the filter result. If the cutoff is negative, no cutoff will be applied. Default is -1

aicssegmentation.core.seg_dot.dot_2d_slice_by_slice_wrapper(struct_img: numpy.ndarray, s2_param: List)[source]

wrapper for 2D spot filter on 3D image slice by slice

struct_img: np.ndarray

a 3d numpy array, usually the image after smoothing

s2_param: List

[[scale_1, cutoff_1], [scale_2, cutoff_2], …], e.g. [[1, 0.1]] or [[1, 0.12], [3, 0.1]]: scale_x is set based on the estimated radius of your target dots. For example, if visually the diameter of the dots is usually 3~4 pixels, then you may want to set scale_x as 1 or something near 1 (like 1.25). Multiple scales can be used if you have dots of very different sizes. cutoff_x is a threshold applied on the actual filter response to get the binary result. A smaller cutoff_x may yield more dots and “fatter” segmentation, while a larger cutoff_x is less permissive and yields fewer dots and slimmer segmentation.
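
A usage sketch (img_smooth is assumed to be a smoothed 3D array; the scale/cutoff pairs are illustrative):

>>> from aicssegmentation.core.seg_dot import dot_2d_slice_by_slice_wrapper
>>> # two scales for dots of different sizes, each with its own cutoff
>>> bw = dot_2d_slice_by_slice_wrapper(img_smooth, s2_param=[[1, 0.12], [3, 0.1]])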

aicssegmentation.core.seg_dot.dot_3d(struct_img: numpy.ndarray, log_sigma: float, cutoff=-1)[source]

apply 3D spot filter on a 3D image

struct_img: np.ndarray

the 3D image to segment

log_sigma: float

the size of the filter, which can be set based on the estimated radius of your target dots. For example, if visually the diameter of the dots is usually 3~4 pixels, then you may want to set this as 1 or something near 1 (like 1.25).

cutoff: float

the cutoff value to apply on the filter result. If the cutoff is negative, no cutoff will be applied. Default is -1

aicssegmentation.core.seg_dot.dot_3d_wrapper(struct_img: numpy.ndarray, s3_param: List)[source]

wrapper for 3D spot filter

struct_img: np.ndarray

a 3d numpy array, usually the image after smoothing

s3_param: List

[[scale_1, cutoff_1], [scale_2, cutoff_2], …], e.g. [[1, 0.1]] or [[1, 0.12], [3, 0.1]]. scale_x is set based on the estimated radius of your target dots. For example, if visually the diameter of the dots is about 3~4 pixels, then you may want to set scale_x as 1 or something near 1 (like 1.25). Multiple scales can be used if you have dots of very different sizes. cutoff_x is a threshold applied on the actual filter response to get the binary result. A smaller cutoff_x may yield more dots and “fatter” segmentation, while a larger cutoff_x is less permissive and yields fewer dots and slimmer segmentation.
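
A usage sketch (img_smooth is assumed to be a smoothed 3D array; the scale/cutoff pair is illustrative):

>>> from aicssegmentation.core.seg_dot import dot_3d_wrapper
>>> # single scale with a cutoff of 0.1 on the filter response
>>> bw = dot_3d_wrapper(img_smooth, s3_param=[[1, 0.1]])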

aicssegmentation.core.seg_dot.dot_slice_by_slice(struct_img: numpy.ndarray, log_sigma: float, cutoff=-1)[source]

apply 2D spot filter on 3D image slice by slice

struct_img: np.ndarray

a 3d numpy array, usually the image after smoothing

log_sigma: float

the size of the filter, which can be set based on the estimated radius of your target dots. For example, if visually the diameter of the dots is usually 3~4 pixels, then you may want to set this as 1 or something near 1 (like 1.25).

cutoff: float

the cutoff value to apply on the filter result. If the cutoff is negative, no cutoff will be applied. Default is -1

aicssegmentation.core.seg_dot.logSlice(image: numpy.ndarray, sigma_list: List, threshold: float)[source]

apply multi-scale 2D spot filter on a 2D image and binarize with threshold

image: np.ndarray

the 2D image to segment

sigma_list: List

The list of sigma values representing filters at multiple scales

threshold: float

the cutoff to apply to get the binary output

aicssegmentation.core.utils module

aicssegmentation.core.utils.absolute_eigenvaluesh(nd_array)[source]

Computes the eigenvalues sorted by absolute value from the symmetrical matrix.

nd_array: nd.ndarray

array from which the eigenvalues will be calculated.

A list with the eigenvalues sorted in absolute ascending order (e.g. [eigenvalue1, eigenvalue2, …])

aicssegmentation.core.utils.cell_local_adaptive_threshold(structure_img_smooth: numpy.ndarray, cell_wise_min_area: int)[source]
aicssegmentation.core.utils.divide_nonzero(array1, array2)[source]

Divides two arrays. Returns zero when dividing by zero.

aicssegmentation.core.utils.get_3dseed_from_mid_frame(bw: numpy.ndarray, stack_shape: List = None, mid_frame: int = -1, hole_min: int = 1, bg_seed: bool = True) → numpy.ndarray[source]

build a 3D seed image from the binary segmentation of a single slice

bw: np.ndarray

the 2d segmentation of a single frame, or a 3D array with only one slice containing segmentation

stack_shape: List

(only used when bw is 2d) the shape of original 3d image, e.g. shape_3d = img.shape

mid_frame: int

(only used when bw is 2d) the z index in the whole z-stack from which bw was taken

hole_min: int

any connected component in bw with size smaller than hole_min will be excluded from seed image generation

bg_seed: bool

bg_seed=True will add a background seed at the first frame (z=0).

aicssegmentation.core.utils.get_middle_frame(struct_img: numpy.ndarray, method: str = 'z') → int[source]

find the middle z frame of an image stack

struct_img: np.ndarray

the 3D image to process

method: str

which method to use to determine the middle frame. Options are “z” or “intensity”. “z” is based solely on the number of z frames. The “intensity” method uses an Otsu threshold to estimate the volume of foreground signal in the stack; the estimated volume of each z frame forms a z-profile, and another Otsu threshold is applied on the z-profile to find the best z frame (with the assumption of two peaks along the z-profile, one near the bottom of the cells and one near the top of the cells, so the optimal separation is the middle of the stack).

mid_frame: int

the z index of the middle z frame
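
A sketch combining get_middle_frame and get_3dseed_from_mid_frame to build a seed image (img_smooth is assumed to be a smoothed 3D array; the threshold used to produce the 2D mask is a placeholder for illustration only):

>>> from aicssegmentation.core.utils import get_middle_frame, get_3dseed_from_mid_frame
>>> mid_z = get_middle_frame(img_smooth, method="intensity")
>>> # bw_mid: a 2D segmentation of the middle slice (placeholder threshold)
>>> bw_mid = img_smooth[mid_z, :, :] > 0.5
>>> seed = get_3dseed_from_mid_frame(bw_mid, img_smooth.shape, mid_z, hole_min=1)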

aicssegmentation.core.utils.get_seed_for_objects(raw: numpy.ndarray, bw: numpy.ndarray, area_min: int = 1, area_max: int = 10000, bg_seed: bool = True) → numpy.ndarray[source]

build a seed image for an image of 3D objects (assuming roughly convex shape in 3D) using the information in the middle slice

raw: np.ndarray

original image used to determine the middle slice

bw: np.ndarray

a rough 3D segmentation; the segmentation in the middle slice is expected to have relatively good quality

area_min: int

estimated minimal size on one single slice (major body chunk, e.g. the center XY plane of a 3D ball) of an object

area_max: int

estimated maximal size on one single slice (major body chunk, e.g. the center XY plane of a 3D ball) of an object. It is recommended to be conservative (set this value a little larger)

bg_seed: bool

bg_seed=True will add a background seed at the first frame (z=0).

aicssegmentation.core.utils.histogram_otsu(hist)[source]

Apply Otsu thresholding method on 1D histogram

aicssegmentation.core.utils.hole_filling(bw: numpy.ndarray, hole_min: int, hole_max: int, fill_2d: bool = True) → numpy.ndarray[source]

Fill holes in 2D/3D segmentation

bw: np.ndarray

a binary 2D/3D image.

hole_min: int

the minimum size of the holes to be filled

hole_max: int

the maximum size of the holes to be filled

fill_2d: bool

if fill_2d=True, a 3D image will be filled slice by slice. If you think of a hollow tube along the z direction, the inside is not a hole under 3D topology, but the inside on each slice is indeed a hole under 2D topology.

Return:

a binary image after hole filling
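
For example (bw is assumed to be a binary segmentation; the hole size bounds are illustrative):

>>> from aicssegmentation.core.utils import hole_filling
>>> # fill holes between 1 and 400 pixels in size, slice by slice
>>> bw_filled = hole_filling(bw, hole_min=1, hole_max=400, fill_2d=True)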

aicssegmentation.core.utils.invert_mask(img)[source]
aicssegmentation.core.utils.mask_image(image, mask, value: int = 0)[source]
aicssegmentation.core.utils.peak_local_max_wrapper(struct_img_for_peak: numpy.ndarray, bw: numpy.ndarray) → numpy.ndarray[source]
aicssegmentation.core.utils.prune_z_slices(bw: numpy.ndarray)[source]

prune the segmentation by keeping only a certain range of z-slices, under the assumption that all signals live in only a few consecutive z-slices. This function first determines the key z-slice where most of the signal lives and then includes a few slices up/down along z to complete the segmentation. This is useful when you have prior knowledge about your segmentation target; it can effectively exclude small segmented objects caused by noise/artifacts in z-slices where we are sure the signal should not live.

bw: np.ndarray

the segmentation before pruning

aicssegmentation.core.utils.remove_hot_pixel(seg: numpy.ndarray) → numpy.ndarray[source]

remove hot pixel from segmentation

aicssegmentation.core.utils.remove_index_object(label: numpy.ndarray, id_to_remove: List[int] = [1], in_place: bool = False) → numpy.ndarray[source]
aicssegmentation.core.utils.segmentation_intersection(seg: List) → numpy.ndarray[source]

get the intersection of multiple segmentations into a single result

Parameters

seg (List) – a list of segmentations, should all have the same shape

aicssegmentation.core.utils.segmentation_union(seg: List) → numpy.ndarray[source]

merge multiple segmentations into a single result

Parameters

seg (List) – a list of segmentations, should all have the same shape

aicssegmentation.core.utils.segmentation_xor(seg: List) → numpy.ndarray[source]

get the XOR of multiple segmentations into a single result

Parameters

seg (List) – a list of segmentations, should all have the same shape
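
For example (seg1 and seg2 are assumed to be binary segmentations of the same shape):

>>> from aicssegmentation.core.utils import segmentation_union, segmentation_intersection
>>> bw_union = segmentation_union([seg1, seg2])
>>> bw_overlap = segmentation_intersection([seg1, seg2])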

aicssegmentation.core.utils.size_filter(img: numpy.ndarray, min_size: int, method: str = '3D', connectivity: int = 1)[source]

size filter

img: np.ndarray

the image to filter on

min_size: int

the minimum size to keep

method: str

either “3D” or “slice_by_slice”, default is “3D”

connectivity: int

the connectivity to use when computing object size
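
For example (bw is assumed to be a binary segmentation; the minimum size is illustrative):

>>> from aicssegmentation.core.utils import size_filter
>>> # remove 3D connected components smaller than 100 voxels
>>> bw_clean = size_filter(bw, min_size=100, method="3D", connectivity=1)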

aicssegmentation.core.utils.sortbyabs(a: numpy.ndarray, axis=0)[source]

Sort array along a given axis by the absolute value modified from: http://stackoverflow.com/a/11253931/4067734

aicssegmentation.core.utils.topology_preserving_thinning(bw: numpy.ndarray, min_thickness: int = 1, thin: int = 1) → numpy.ndarray[source]

perform thinning on segmentation without breaking topology

bw: np.ndarray

the 3D binary image to be thinned

min_thickness: int

Half of the minimum width you want to keep from being thinned. For example, if you don’t want to thin objects whose width is smaller than 4 (thinning them further may break the thin object and alter the topology), you can set this value to 2.

thin: int

the amount to thin (has to be a positive integer). The number of pixels to be removed from the outer boundary towards the center.

A binary image after thinning
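
For example (bw is assumed to be a 3D binary segmentation; the parameter values are illustrative):

>>> from aicssegmentation.core.utils import topology_preserving_thinning
>>> # peel 1 pixel off the outer boundary, but leave parts thinner than 2*2 pixels intact
>>> bw_thin = topology_preserving_thinning(bw, min_thickness=2, thin=1)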

aicssegmentation.core.utils.watershed_wrapper(bw: numpy.ndarray, local_maxi: numpy.ndarray) → numpy.ndarray[source]

aicssegmentation.core.vessel module

aicssegmentation.core.vessel.compute_vesselness2D(eigen2, tau)[source]

backend for computing 2D filament filter

aicssegmentation.core.vessel.compute_vesselness3D(eigen2, eigen3, tau)[source]

backend for computing 3D filament filter

aicssegmentation.core.vessel.filament_2d_wrapper(struct_img: numpy.ndarray, f2_param: List[List])[source]

wrapper for 2d filament filter

struct_img: np.ndarray

the image (should have been smoothed) to be segmented. The image is either 2D or 3D. If 3D, the filter is applied in a slice by slice fashion

f2_param: List[List]

[[scale_1, cutoff_1], [scale_2, cutoff_2], …], e.g., [[1, 0.01]] or [[1, 0.05], [0.5, 0.1]]. Here, scale_x is set based on the estimated thickness of your target filaments. For example, if visually the thickness of the filaments is usually 3~4 pixels, then you may want to set scale_x as 1 or something near 1 (like 1.25). Multiple scales can be used if you have filaments of very different thickness. cutoff_x is a threshold applied on the actual filter response to get the binary result. A smaller cutoff_x may yield more filaments (especially detecting more dim ones) and thicker segmentation, while a larger cutoff_x is less permissive and yields fewer filaments and slimmer segmentation.

T. Jerman, et al. Enhancement of vascular structures in 3D and 2D angiographic images. IEEE transactions on medical imaging. 2016 Apr 4;35(9):2107-18.
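
A usage sketch (img_smooth is assumed to be a smoothed 3D array; the scale/cutoff pair is illustrative):

>>> from aicssegmentation.core.vessel import filament_2d_wrapper
>>> # single scale, applied slice by slice on a 3D image
>>> bw = filament_2d_wrapper(img_smooth, f2_param=[[1.5, 0.16]])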

aicssegmentation.core.vessel.filament_3d_wrapper(struct_img: numpy.ndarray, f3_param: List[List])[source]

wrapper for 3d filament filter

struct_img: np.ndarray

the image (should have been smoothed) to be segmented. The image has to be 3D.

f3_param: List[List]

[[scale_1, cutoff_1], [scale_2, cutoff_2], …], e.g., [[1, 0.01]] or [[1, 0.05], [0.5, 0.1]]. scale_x is set based on the estimated thickness of your target filaments. For example, if visually the thickness of the filaments is usually 3~4 pixels, then you may want to set scale_x as 1 or something near 1 (like 1.25). Multiple scales can be used if you have filaments of very different thickness. cutoff_x is a threshold applied on the actual filter response to get the binary result. A smaller cutoff_x may yield more filaments (especially detecting more dim ones) and thicker segmentation, while a larger cutoff_x is less permissive and yields fewer filaments and slimmer segmentation.

T. Jerman, et al. Enhancement of vascular structures in 3D and 2D angiographic images. IEEE transactions on medical imaging. 2016 Apr 4;35(9):2107-18.
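
A usage sketch (img_smooth is assumed to be a smoothed 3D array; the scale/cutoff pairs are illustrative):

>>> from aicssegmentation.core.vessel import filament_3d_wrapper
>>> # two scales for filaments of different thickness
>>> bw = filament_3d_wrapper(img_smooth, f3_param=[[1, 0.01], [2, 0.015]])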

aicssegmentation.core.vessel.vesselness2D(nd_array: numpy.ndarray, sigmas: List, tau: float = 1, whiteonblack: bool = True, cutoff: float = -1)[source]

Multi-scale 2D filament filter

nd_array: np.ndarray

the 2D image to be filtered

sigmas: List

a list of scales to use

tau: float

parameter that controls response uniformity. The value has to be between 0.5 and 1. Lower tau means a more intense output response. Default is 1

whiteonblack: bool

whether the filamentous structures are bright on dark background or dark on bright. Default is True.

cutoff: float

the cutoff value to apply on the filter result. If the cutoff is negative, no cutoff will be applied. Default is -1

T. Jerman, et al. Enhancement of vascular structures in 3D and 2D angiographic images. IEEE transactions on medical imaging. 2016 Apr 4;35(9):2107-18.

aicssegmentation.core.vessel.vesselness2D_single_slice(nd_array: numpy.ndarray, single_z: int, sigmas: List, tau: float = 1, whiteonblack: bool = True, cutoff: float = -1)[source]

Multi-scale 2D filament filter applied on a single z slice of a 3D image

nd_array: np.ndarray

the 3D image to be filtered

single_z: int

the index of the slice to apply the filter

sigmas: List

a list of scales to use

tau: float

parameter that controls response uniformity. The value has to be between 0.5 and 1. Lower tau means a more intense output response. Default is 1

whiteonblack: bool

whether the filamentous structures are bright on dark background or dark on bright. Default is True.

cutoff: float

the cutoff value to apply on the filter result. If the cutoff is negative, no cutoff will be applied. Default is -1

T. Jerman, et al. Enhancement of vascular structures in 3D and 2D angiographic images. IEEE transactions on medical imaging. 2016 Apr 4;35(9):2107-18.

aicssegmentation.core.vessel.vesselness3D(nd_array: numpy.ndarray, sigmas: List, tau=1, whiteonblack=True, cutoff: float = -1)[source]

Multi-scale 3D filament filter

nd_array: np.ndarray

the 3D image to be filtered

sigmas: List

a list of scales to use

tau: float

parameter that controls response uniformity. The value has to be between 0.5 and 1. Lower tau means a more intense output response. Default is 1

whiteonblack: bool

whether the filamentous structures are bright on dark background or dark on bright. Default is True.

cutoff: float

the cutoff value to apply on the filter result. If the cutoff is negative, no cutoff will be applied. Default is -1

T. Jerman, et al. Enhancement of vascular structures in 3D and 2D angiographic images. IEEE transactions on medical imaging. 2016 Apr 4;35(9):2107-18.
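
For example (img_smooth is assumed to be a smoothed 3D array; sigma and cutoff values are illustrative):

>>> from aicssegmentation.core.vessel import vesselness3D
>>> # multi-scale response; a non-negative cutoff of 0.01 is applied to the filter result
>>> bw = vesselness3D(img_smooth, sigmas=[1, 2], tau=1, whiteonblack=True, cutoff=0.01)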

aicssegmentation.core.vessel.vesselnessSliceBySlice(nd_array: numpy.ndarray, sigmas: List, tau: float = 1, whiteonblack: bool = True, cutoff: float = -1)[source]

wrapper for applying multi-scale 2D filament filter on 3D images in a slice by slice fashion

nd_array: np.ndarray

the 3D image to be filtered

sigmas: List

a list of scales to use

tau: float

parameter that controls response uniformity. The value has to be between 0.5 and 1. Lower tau means a more intense output response. Default is 1

whiteonblack: bool

whether the filamentous structures are bright on dark background or dark on bright. Default is True.

cutoff: float

the cutoff value to apply on the filter result. If the cutoff is negative, no cutoff will be applied. Default is -1

aicssegmentation.core.visual module

aicssegmentation.core.visual.blob2dExplorer_single(im, sigma, th)[source]

backend for trying 2D spot filter on a single Z slice

Parameters
  • im (np.ndarray) – 2D image array

  • sigma (float) – sigma in the spot filter

  • th (float) – threshold to be applied as cutoff on filter output

>>> from ipywidgets import interact, fixed
>>> import ipywidgets as widgets
>>> # define slide bars for trying different parameters
>>> interact(
>>>     blob2dExplorer_single,
>>>     im = fixed(img),
>>>     sigma = widgets.FloatRangeSlider(
>>>         value = (1, 5),
>>>         min = 1,
>>>         max = 15,
>>>         step = 1,
>>>         continuous_update = False
>>>     ),
>>>     th = widgets.FloatSlider(
>>>         value = 0.02,
>>>         min = 0.01,
>>>         max = 0.1,
>>>         step = 0.01,
>>>         continuous_update = False
>>>     )
>>> );
aicssegmentation.core.visual.fila2dExplorer_single(im, sigma, th)[source]

backend for trying 2D filament filter on a single Z slice

Parameters
  • im (np.ndarray) – 2D image array

  • sigma (float) – sigma in the filament filter

  • th (float) – threshold to be applied as cutoff on filter output

>>> from ipywidgets import interact, fixed
>>> import ipywidgets as widgets
>>> # define slide bars for trying different parameters
>>> interact(
>>>     fila2dExplorer_single,
>>>     im = fixed(img),
>>>     sigma = widgets.FloatSlider(
>>>         value = 3,
>>>         min = 1,
>>>         max = 11,
>>>         step = 1,
>>>         continuous_update = False
>>>     ),
>>>     th = widgets.FloatSlider(
>>>         value = 0.05,
>>>         min = 0.01,
>>>         max = 0.5,
>>>         step = 0.01,
>>>         continuous_update = False
>>>     )
>>> );
aicssegmentation.core.visual.img_seg_combine(img, seg, roi=['Full', None])[source]

creating raw and segmentation side-by-side for visualization

aicssegmentation.core.visual.mipView(im)[source]

simple wrapper to view maximum intensity projection

aicssegmentation.core.visual.random_colormap(nn: int = 10000)[source]

generate a random colormap with nn different colors

nn: int

the number of random colors needed

>>> import matplotlib.pyplot as plt
>>> # img_label is output of a label function and represent all connected components
>>> plt.imshow(img_label, cmap=random_colormap())
aicssegmentation.core.visual.seg_fluo_side_by_side(im, seg, roi=['Full', None])[source]

wrapper for displaying raw and segmentation side by side

aicssegmentation.core.visual.segmentation_quick_view(seg: numpy.ndarray)[source]

wrapper for visualizing segmentation in ITK viewer

seg: np.ndarray

3D stack of segmentation to view

>>> from itkwidgets import view
>>> view(segmentation_quick_view(seg))
aicssegmentation.core.visual.single_fluorescent_view(im)[source]

wrapper for visualizing an image stack in ITK viewer

im: np.ndarray

3D image stack to view

>>> from itkwidgets import view
>>> view(single_fluorescent_view(im))
aicssegmentation.core.visual.sliceViewer(im: numpy.ndarray, zz: int)[source]

simple wrapper to view one slice of a z-stack

im: np.ndarray

3D image stack to view

zz: int

the slice to return

>>> from ipywidgets import interact, fixed
>>> import ipywidgets as widgets
>>> interact(
>>>     sliceViewer,
>>>     im = fixed(struct_img),
>>>     zz = widgets.IntSlider(
>>>         min = 0,
>>>         max = struct_img.shape[0] - 1,
>>>         step = 1,
>>>         value = struct_img.shape[0] // 2,
>>>         continuous_update = False
>>>     )
>>> );

Module contents