camera_alignment_core.alignment_utils package#

Submodules#

camera_alignment_core.alignment_utils.alignment_info module#

class camera_alignment_core.alignment_utils.alignment_info.AlignmentInfo(rotation: int, shift_x: int, shift_y: int, z_offset: int, scaling: float)#

Bases: object

These are metrics captured/measured as part of generating the alignment matrix.

rotation: int#
scaling: float#
shift_x: int#
shift_y: int#
z_offset: int#
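For illustration, the dataclass can be mirrored and constructed like this (a minimal standalone sketch; the field values below are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class AlignmentInfo:
    """Mirror of the documented dataclass, shown for illustration."""
    rotation: int
    shift_x: int
    shift_y: int
    z_offset: int
    scaling: float

# Hypothetical metrics from one alignment run
info = AlignmentInfo(rotation=0, shift_x=3, shift_y=-2, z_offset=0, scaling=1.0)
```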

camera_alignment_core.alignment_utils.alignment_qc module#

class camera_alignment_core.alignment_utils.alignment_qc.AlignmentQC(reference: Optional[ndarray[Any, dtype[uint16]]] = None, moving: Optional[ndarray[Any, dtype[uint16]]] = None, reference_seg: Optional[ndarray[Any, dtype[uint16]]] = None, moving_seg: Optional[ndarray[Any, dtype[uint16]]] = None, ref_mov_coor_dict: Optional[Dict[Tuple[int, int], Tuple[int, int]]] = None, rev_coor_dict: Optional[Dict[Tuple[int, int], Tuple[int, int]]] = None, tform: Optional[SimilarityTransform] = None)#

Bases: object

apply_transform()#
check_all_defined() Optional[list[str]]#
check_z_offset_between_ref_mov() Tuple[int, int, int]#
report_change_fov_intensity_parameters() Dict[str, int]#

Reports changes in FOV intensity after transform.

Returns:

A dictionary with the following keys:

  • median_intensity

  • min_intensity

  • max_intensity

  • 1_percentile (first percentile intensity)

  • 995th_percentile (99.5th percentile intensity)
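The reported metrics are simple summary statistics of a field of view. A minimal sketch of computing them for one image (key names taken from the docstring above; the library reports the *change* in these values, which would be the difference of two such dictionaries):

```python
import numpy as np

def fov_intensity_parameters(img):
    # Summary intensity statistics for one field of view.
    return {
        "median_intensity": float(np.median(img)),
        "min_intensity": int(img.min()),
        "max_intensity": int(img.max()),
        "1_percentile": float(np.percentile(img, 1)),
        "995th_percentile": float(np.percentile(img, 99.5)),
    }

params = fov_intensity_parameters(np.arange(101, dtype=np.uint16))
```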

report_changes_in_coordinates_mapping() Tuple[bool, float]#

Report changes in bead (center-of-FOV) centroid coordinates before and after transform. A good transform reduces the distance between transformed_mov_beads and ref_beads relative to the distance between mov_beads and ref_beads, or at least does not increase it by more than a small threshold (thresh=5). A bad transform increases the distance between transformed_mov_beads and ref_beads.

report_changes_in_mse() Tuple[bool, float]#

Report changes in normalized root-mean-squared-error before and after transform, post-segmentation.

Returns:

  • qc (A boolean indicating whether the change passed (True) or failed (False) QC)

  • diff_mse (Difference in mean squared error)
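A sketch of the underlying comparison, assuming NRMSE means the RMSE normalized by the reference intensity range (the library's exact normalization may differ):

```python
import numpy as np

def report_changes_in_mse(reference, moving_before, moving_after):
    # Negative diff means the transform brought the moving image
    # closer to the reference, which passes QC.
    def nrmse(a, b):
        return np.sqrt(np.mean((a - b) ** 2)) / (b.max() - b.min())

    diff = nrmse(moving_after, reference) - nrmse(moving_before, reference)
    return bool(diff < 0), float(diff)
```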

report_full_metrics() Dict[str, Optional[object]]#
report_number_beads() Tuple[bool, int]#
report_ref_mov_image_snr() Tuple[int, int, int, int]#
set_raw_images(reference: ndarray[Any, dtype[uint16]], moving: ndarray[Any, dtype[uint16]])#
set_ref_mov_coor_dict(ref_mov_coor_dict: Dict[Tuple[int, int], Tuple[int, int]])#
set_rev_coor_dict(rev_coor_dict: Dict[Tuple[int, int], Tuple[int, int]])#
set_seg_images(reference_seg: ndarray[Any, dtype[uint16]], moving_seg: ndarray[Any, dtype[uint16]])#
set_tform(tform: SimilarityTransform)#

camera_alignment_core.alignment_utils.crop_rings module#

class camera_alignment_core.alignment_utils.crop_rings.CropRings(img: ndarray[Any, dtype[uint16]], pixel_size: float, magnification: int, filter_px_size: int = 50, bead_distance: int = 15)#

Bases: object

static get_crop_dimensions(img_height: int, img_width: int, cross_y: int, cross_x: int, bead_dist_px: float, crop_param: float = 0.5) Tuple[int, int, int, int]#

Calculates the crop dimensions from the location of the cross so that complete rings are captured in the image.

Parameters:
  • img_height – height of the rings image in pixels

  • img_width – width of the rings image in pixels

  • cross_y – y location of the center cross

  • cross_x – x location of the center cross

  • bead_dist_px – distance between rings in pixels

  • crop_param – a float between 0 and 1 indicating the fraction of the ring spacing to leave as a margin after cropping

Returns:

  • crop_top (top pixels to keep)

  • crop_bottom (bottom pixels to keep)

  • crop_left (left pixels to keep)

  • crop_right (right pixels to keep)
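One plausible way to compute these values (an illustrative sketch, not necessarily the library's exact arithmetic): on each side of the cross, keep a whole number of ring spacings plus a margin of crop_param * bead_dist_px.

```python
import math

def get_crop_dimensions(img_height, img_width, cross_y, cross_x,
                        bead_dist_px, crop_param=0.5):
    # Span kept on one side of the cross: an integer number of ring
    # spacings plus the crop_param margin.
    def span(extent):
        rings = math.floor((extent - crop_param * bead_dist_px) / bead_dist_px)
        return int(rings * bead_dist_px + crop_param * bead_dist_px)

    return (
        cross_y - span(cross_y),               # crop_top
        cross_y + span(img_height - cross_y),  # crop_bottom
        cross_x - span(cross_x),               # crop_left
        cross_x + span(img_width - cross_x),   # crop_right
    )
```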

run(min_no_crop_magnification: int = 63, segmentation_mult_factor: float = 2.5) Tuple[ndarray[Any, dtype[uint16]], Tuple[int, int, int, int]]#

Crop image closer to the ring grid.

Parameters:
  • min_no_crop_magnification – minimum magnification at which the bead image does not need to be cropped (e.g., because it is zoomed in enough)

  • segmentation_mult_factor – value passed directly to SegmentRings::segment_cross as input_mult_factor

camera_alignment_core.alignment_utils.get_center_z module#

camera_alignment_core.alignment_utils.get_center_z.get_center_z(img_stack: ndarray[Any, dtype[uint16]], thresh=(0.2, 99.8)) int#

Get the index of the center z-slice by finding the slice with the maximum contrast value.

Parameters:
  • img_stack – a 3D (or 2D) image stack

Returns:

center_z: index of the center z-slice
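A minimal sketch of the idea, using the spread between the low and high percentiles as the contrast measure (an assumption about how "contrast" is defined; the thresh default of (0.2, 99.8) suggests percentiles):

```python
import numpy as np

def get_center_z(img_stack, thresh=(0.2, 99.8)):
    # Contrast proxy per z-slice: intensity spread between the low and
    # high percentiles; the slice with the largest spread wins.
    contrasts = [
        np.percentile(z_slice, thresh[1]) - np.percentile(z_slice, thresh[0])
        for z_slice in img_stack
    ]
    return int(np.argmax(contrasts))
```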

camera_alignment_core.alignment_utils.ring_alignment module#

class camera_alignment_core.alignment_utils.ring_alignment.RingAlignment(ref_rings_props: DataFrame, ref_cross_label: int, mov_rings_props: DataFrame, mov_cross_label: int)#

Bases: object

assign_ref_to_mov(updated_ref_peak_dict: Dict[int, Tuple[int, int]], updated_mov_peak_dict: Dict[int, Tuple[int, int]]) Dict[Tuple[int, int], Tuple[int, int]]#

Assigns beads from the moving image to the reference image using linear_sum_assignment to minimize the distance between the same bead in the two channels. When one channel contains more beads than the other, this method discards any extra bead that cannot be assigned to a single bead in the other image.

Parameters:
  • updated_ref_peak_dict – A dictionary ({bead_number: (coor_y, coor_x)}) from reference beads

  • updated_mov_peak_dict – A dictionary ({bead_number: (coor_y, coor_x)}) from moving beads

Returns:

ref_mov_coor_dict: A dictionary mapping reference bead coordinates to moving bead coordinates
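Since the docstring names linear_sum_assignment, the core of the method can be sketched as a rectangular assignment problem over pairwise distances (a standalone sketch, not the library's exact code):

```python
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def assign_ref_to_mov(ref_peaks, mov_peaks):
    # ref_peaks / mov_peaks: {bead_number: (y, x)}. Solve the rectangular
    # assignment problem on pairwise distances; unmatched extras in the
    # larger set are simply dropped.
    ref_coors = list(ref_peaks.values())
    mov_coors = list(mov_peaks.values())
    cost = cdist(ref_coors, mov_coors)
    ref_idx, mov_idx = linear_sum_assignment(cost)
    return {ref_coors[i]: mov_coors[j] for i, j in zip(ref_idx, mov_idx)}
```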

calc_cross_offset()#

Estimate image offset by calculating the distance between the centroids of the cross in the reference and moving images.

change_coor_system(coor_dict: Dict[Tuple[int, int], Tuple[int, int]]) Dict[Tuple[int, int], Tuple[int, int]]#

Changes coordinates in a dictionary from {(y1, x1): (y2, x2)} to {(x1, y1): (x2, y2)}.

Parameters:
  • coor_dict – A dictionary of coordinates in the form {(y1, x1): (y2, x2)}

Returns:

An updated, reversed coordinate dictionary of the form {(x1, y1): (x2, y2)}
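The swap described above amounts to a one-line dict comprehension (illustrative sketch):

```python
def change_coor_system(coor_dict):
    # Swap (y, x) -> (x, y) on both the keys and the values.
    return {
        (x1, y1): (x2, y2)
        for (y1, x1), (y2, x2) in coor_dict.items()
    }
```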

pos_bead_matches(ref_peak_dict: Dict[int, Tuple[int, int]], mov_peak_dict: Dict[int, Tuple[int, int]]) Tuple[Dict[int, List[int]], Dict[int, List[float]], float]#

Constrain the ring-matching problem by identifying which rings in the moving image are closer to a given reference-image ring than the spacing between rows/columns of rings. The threshold distance is the median distance between rings and their 4 nearest neighbors in the reference image.

Parameters:
  • ref_peak_dict – A dictionary ({bead_number: (coor_y, coor_x)}) from reference beads

  • mov_peak_dict – A dictionary ({bead_number: (coor_y, coor_x)}) from moving beads

Returns:

match_dict: A dictionary mapping reference bead numbers to the moving bead numbers within the threshold distance

cost_dict: A dictionary mapping reference bead numbers to the distances to their matching moving beads

thresh_dist: The calculated threshold distance
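The neighbor search described above can be sketched with a KD-tree (a standalone sketch under the stated assumptions; the library's implementation may differ in detail):

```python
import numpy as np
from scipy.spatial import cKDTree

def pos_bead_matches(ref_peaks, mov_peaks):
    # ref_peaks / mov_peaks: {bead_number: (y, x)}
    ref_ids, ref_xy = zip(*ref_peaks.items())
    mov_ids, mov_xy = zip(*mov_peaks.items())
    ref_xy, mov_xy = np.array(ref_xy), np.array(mov_xy)

    # Threshold: median distance from each reference ring to its
    # 4 nearest neighbours (k=5 because the query includes the point itself).
    ref_tree = cKDTree(ref_xy)
    dists, _ = ref_tree.query(ref_xy, k=5)
    thresh_dist = float(np.median(dists[:, 1:]))

    # Candidate moving rings within the threshold of each reference ring.
    match_dict, cost_dict = {}, {}
    mov_tree = cKDTree(mov_xy)
    for rid, coor in zip(ref_ids, ref_xy):
        idx = mov_tree.query_ball_point(coor, thresh_dist)
        match_dict[rid] = [mov_ids[i] for i in idx]
        cost_dict[rid] = [float(np.linalg.norm(coor - mov_xy[i])) for i in idx]
    return match_dict, cost_dict, thresh_dist
```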

rings_coor_dict(props: DataFrame, cross_label: int) Dict[int, Tuple[int, int]]#

Generate a dictionary from regionprops_table output in the form {label: (coor_y, coor_x)} for a rings image.

Parameters:
  • props – a dataframe containing regionprops_table output

  • cross_label – integer label representing the center cross in the rings image

Returns:

img_dict: A dictionary mapping labels to coordinates
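A sketch of this conversion, assuming the regionprops_table dataframe has 'label', 'centroid-0', 'centroid-1' columns (skimage's row/column order, i.e. (y, x)):

```python
import pandas as pd

def rings_coor_dict(props, cross_label):
    # Map each label to its centroid, skipping the center-cross label.
    return {
        int(row["label"]): (row["centroid-0"], row["centroid-1"])
        for _, row in props.iterrows()
        if int(row["label"]) != cross_label
    }

props = pd.DataFrame({
    "label": [1, 2, 3],
    "centroid-0": [5.0, 15.0, 25.0],
    "centroid-1": [5.0, 15.0, 25.0],
})
coords = rings_coor_dict(props, cross_label=2)
```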

run() Tuple[SimilarityTransform, AlignmentInfo]#

camera_alignment_core.alignment_utils.segment_rings module#

class camera_alignment_core.alignment_utils.segment_rings.SegmentRings(img: ndarray[Any, dtype[uint16]], pixel_size: float, magnification: int, thresh: Optional[Tuple[float, float]] = None, bead_distance_um: float = 15, cross_size_um: float = 4.4999999999999996e-05, ring_radius_um: float = 7e-07)#

Bases: object

dot_2d_slice_by_slice_wrapper(struct_img: ndarray[Any, dtype[float32]], s2_param: List) ndarray[Any, dtype[bool_]]#

Wrapper for a 2D spot filter applied to a 3D image slice by slice. Adapted from https://github.com/AllenCell/aics-segmentation/blob/main/aicssegmentation/core/seg_dot.py

Parameters:
  • struct_img – a 3D numpy array, usually the image after smoothing

  • s2_param – [[scale_1, cutoff_1], [scale_2, cutoff_2], …], e.g. [[1, 0.1]] or [[1, 0.12], [3, 0.1]]. scale_x is set based on the estimated radius of the target dots: if the dots are visually 3–4 pixels in diameter, set scale_x to 1 or something near 1 (like 1.25). Multiple scales can be used if the dots vary widely in size. cutoff_x is a threshold applied to the filter response to produce the binary result: a smaller cutoff_x may yield more dots and fatter segmentation, while a larger cutoff_x is less permissive and yields fewer dots and slimmer segmentation.
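The spot filter in aics-segmentation is a Laplacian-of-Gaussian response thresholded per (scale, cutoff) pair; a simplified slice-by-slice sketch (the upstream code additionally rescales the response by the scale, so treat this as illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def dot_2d_slice_by_slice(struct_img, s2_param):
    # Apply a 2D LoG spot filter to each z-slice; responses from all
    # (scale, cutoff) pairs are OR-ed into one binary mask.
    out = np.zeros(struct_img.shape, dtype=bool)
    for scale, cutoff in s2_param:
        for z in range(struct_img.shape[0]):
            response = -gaussian_laplace(struct_img[z].astype(np.float32),
                                         np.sqrt(scale))
            out[z] |= response > cutoff
    return out
```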

filter_center_cross(label_seg: ndarray[Any, dtype[uint16]]) Tuple[ndarray[Any, dtype[uint16]], DataFrame, int]#

Filters out the center cross (the biggest segmented object) from a labelled rings image.

Parameters:

label_seg (A labelled image) –

Returns:

  • filter_label (A labelled image after filtering the center cross (center cross = 0))

  • props_df (A dataframe from regionprops_table with columns [‘label’, ‘centroid-0’, ‘centroid-1’, ‘area’])

  • cross_label (The integer label of center cross)

get_number_rings(img: ndarray[Any, dtype[uint16]], mult_factor: int = 5) int#

Estimates the number of rings in a rings object using the location of the center cross.

Parameters:
  • img – input image (after smoothing)

  • mult_factor – multiplication factor used to segment the cross

Returns:

num_beads: estimated number of beads

preprocess_img() ndarray[Any, dtype[uint16]]#

Pre-process the raw-intensity image with rescaling and smoothing, using pre-defined parameters derived from the image magnification.

Returns:

smooth: smoothed image

run() Tuple[ndarray[Any, dtype[bool_]], ndarray[Any, dtype[uint16]], DataFrame, int]#
segment_cross(img: ndarray[Any, dtype[uint16]], mult_factor_range: Tuple[int, int] = (1, 5), input_mult_factor: Optional[float] = 0.0) Tuple[ndarray[Any, dtype[bool_]], DataFrame]#

Segments the center cross in the image by iterating over the intensity-threshold parameter until exactly one object larger than the expected cross size (in pixels) is segmented.

Parameters:
  • img – image (intensity, after smoothing)

  • mult_factor_range – range of multiplication factors used to determine the segmentation threshold

Returns:

  • seg_cross (binary image of segmented cross)

  • props (dataframe describing the centroid location and size of the segmented cross)
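The iteration can be sketched as follows (an illustrative re-implementation assuming a mean + m·std threshold rule; the library's exact rule and search direction may differ):

```python
import numpy as np
from scipy import ndimage

def segment_cross(img, expected_cross_area, mult_factor_range=(1, 5)):
    # Lower the threshold (mean + m * std) step by step until exactly one
    # connected object exceeds the expected cross size.
    for m in np.arange(mult_factor_range[1], mult_factor_range[0] - 0.5, -0.5):
        seg = img > img.mean() + m * img.std()
        labels, n = ndimage.label(seg)
        sizes = ndimage.sum(seg, labels, range(1, n + 1))
        big = [i + 1 for i, s in enumerate(sizes) if s > expected_cross_area]
        if len(big) == 1:
            return labels == big[0]  # binary mask of the cross
    return None
```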

segment_rings_dot_filter(img_2d: ndarray[Any, dtype[uint16]], seg_cross: ndarray[Any, dtype[bool_]], num_beads: int, minArea: int, search_range: Tuple[float, float] = (0, 0.75), size_param: float = 2.5) Tuple[ndarray[Any, dtype[bool_]], ndarray[Any, dtype[uint16]], Optional[float]]#

Segments rings using the 2D dot filter from aics-segmentation. The method loops through a range of filter parameters and automatically selects the parameter that segments the expected number of ring objects.

Parameters:
  • img_2d – rings image (after smoothing)

  • seg_cross – binary mask of the center cross object

  • num_beads – expected number of beads

  • minArea – minimum ring area; any segmented object below this size is filtered out

  • search_range – initial search range of the filter parameter

  • size_param – size parameter of the dot filter

Returns:

  • seg (binary mask of ring segmentation)

  • label (labelled mask of ring segmentation)

  • thresh (filter parameter after optimization)

segment_rings_intensity_threshold(img: ndarray[Any, dtype[uint16]], filter_px_size=50, mult_factor=2.5) Tuple[ndarray[Any, dtype[bool_]], ndarray[Any, dtype[uint16]]]#

Segments rings using an intensity-threshold method.

Parameters:
  • img – rings image (after smoothing)

  • filter_px_size – any segmented object below this size is filtered out

  • mult_factor – multiplication factor used to adjust the threshold

Returns:

  • filtered_seg (binary mask of ring segmentation)

  • filtered_label (labelled mask of ring segmentation)

Module contents#

class camera_alignment_core.alignment_utils.AlignmentInfo(rotation: int, shift_x: int, shift_y: int, z_offset: int, scaling: float)#

Bases: object

These are metrics captured/measured as part of generating the alignment matrix.

rotation: int#
scaling: float#
shift_x: int#
shift_y: int#
z_offset: int#
class camera_alignment_core.alignment_utils.CropRings(img: ndarray[Any, dtype[uint16]], pixel_size: float, magnification: int, filter_px_size: int = 50, bead_distance: int = 15)#

Bases: object

static get_crop_dimensions(img_height: int, img_width: int, cross_y: int, cross_x: int, bead_dist_px: float, crop_param: float = 0.5) Tuple[int, int, int, int]#

Calculates the crop dimensions from the location of the cross so that complete rings are captured in the image.

Parameters:
  • img_height – height of the rings image in pixels

  • img_width – width of the rings image in pixels

  • cross_y – y location of the center cross

  • cross_x – x location of the center cross

  • bead_dist_px – distance between rings in pixels

  • crop_param – a float between 0 and 1 indicating the fraction of the ring spacing to leave as a margin after cropping

Returns:

  • crop_top (top pixels to keep)

  • crop_bottom (bottom pixels to keep)

  • crop_left (left pixels to keep)

  • crop_right (right pixels to keep)

run(min_no_crop_magnification: int = 63, segmentation_mult_factor: float = 2.5) Tuple[ndarray[Any, dtype[uint16]], Tuple[int, int, int, int]]#

Crop image closer to the ring grid.

Parameters:
  • min_no_crop_magnification – minimum magnification at which the bead image does not need to be cropped (e.g., because it is zoomed in enough)

  • segmentation_mult_factor – value passed directly to SegmentRings::segment_cross as input_mult_factor

class camera_alignment_core.alignment_utils.RingAlignment(ref_rings_props: DataFrame, ref_cross_label: int, mov_rings_props: DataFrame, mov_cross_label: int)#

Bases: object

assign_ref_to_mov(updated_ref_peak_dict: Dict[int, Tuple[int, int]], updated_mov_peak_dict: Dict[int, Tuple[int, int]]) Dict[Tuple[int, int], Tuple[int, int]]#

Assigns beads from the moving image to the reference image using linear_sum_assignment to minimize the distance between the same bead in the two channels. When one channel contains more beads than the other, this method discards any extra bead that cannot be assigned to a single bead in the other image.

Parameters:
  • updated_ref_peak_dict – A dictionary ({bead_number: (coor_y, coor_x)}) from reference beads

  • updated_mov_peak_dict – A dictionary ({bead_number: (coor_y, coor_x)}) from moving beads

Returns:

ref_mov_coor_dict: A dictionary mapping reference bead coordinates to moving bead coordinates

calc_cross_offset()#

Estimate image offset by calculating the distance between the centroids of the cross in the reference and moving images.

change_coor_system(coor_dict: Dict[Tuple[int, int], Tuple[int, int]]) Dict[Tuple[int, int], Tuple[int, int]]#

Changes coordinates in a dictionary from {(y1, x1): (y2, x2)} to {(x1, y1): (x2, y2)}.

Parameters:
  • coor_dict – A dictionary of coordinates in the form {(y1, x1): (y2, x2)}

Returns:

An updated, reversed coordinate dictionary of the form {(x1, y1): (x2, y2)}

pos_bead_matches(ref_peak_dict: Dict[int, Tuple[int, int]], mov_peak_dict: Dict[int, Tuple[int, int]]) Tuple[Dict[int, List[int]], Dict[int, List[float]], float]#

Constrain the ring-matching problem by identifying which rings in the moving image are closer to a given reference-image ring than the spacing between rows/columns of rings. The threshold distance is the median distance between rings and their 4 nearest neighbors in the reference image.

Parameters:
  • ref_peak_dict – A dictionary ({bead_number: (coor_y, coor_x)}) from reference beads

  • mov_peak_dict – A dictionary ({bead_number: (coor_y, coor_x)}) from moving beads

Returns:

match_dict: A dictionary mapping reference bead numbers to the moving bead numbers within the threshold distance

cost_dict: A dictionary mapping reference bead numbers to the distances to their matching moving beads

thresh_dist: The calculated threshold distance

rings_coor_dict(props: DataFrame, cross_label: int) Dict[int, Tuple[int, int]]#

Generate a dictionary from regionprops_table output in the form {label: (coor_y, coor_x)} for a rings image.

Parameters:
  • props – a dataframe containing regionprops_table output

  • cross_label – integer label representing the center cross in the rings image

Returns:

img_dict: A dictionary mapping labels to coordinates

run() Tuple[SimilarityTransform, AlignmentInfo]#
class camera_alignment_core.alignment_utils.SegmentRings(img: ndarray[Any, dtype[uint16]], pixel_size: float, magnification: int, thresh: Optional[Tuple[float, float]] = None, bead_distance_um: float = 15, cross_size_um: float = 4.4999999999999996e-05, ring_radius_um: float = 7e-07)#

Bases: object

dot_2d_slice_by_slice_wrapper(struct_img: ndarray[Any, dtype[float32]], s2_param: List) ndarray[Any, dtype[bool_]]#

Wrapper for a 2D spot filter applied to a 3D image slice by slice. Adapted from https://github.com/AllenCell/aics-segmentation/blob/main/aicssegmentation/core/seg_dot.py

Parameters:
  • struct_img – a 3D numpy array, usually the image after smoothing

  • s2_param – [[scale_1, cutoff_1], [scale_2, cutoff_2], …], e.g. [[1, 0.1]] or [[1, 0.12], [3, 0.1]]. scale_x is set based on the estimated radius of the target dots: if the dots are visually 3–4 pixels in diameter, set scale_x to 1 or something near 1 (like 1.25). Multiple scales can be used if the dots vary widely in size. cutoff_x is a threshold applied to the filter response to produce the binary result: a smaller cutoff_x may yield more dots and fatter segmentation, while a larger cutoff_x is less permissive and yields fewer dots and slimmer segmentation.

filter_center_cross(label_seg: ndarray[Any, dtype[uint16]]) Tuple[ndarray[Any, dtype[uint16]], DataFrame, int]#

Filters out the center cross (the biggest segmented object) from a labelled rings image.

Parameters:

label_seg (A labelled image) –

Returns:

  • filter_label (A labelled image after filtering the center cross (center cross = 0))

  • props_df (A dataframe from regionprops_table with columns [‘label’, ‘centroid-0’, ‘centroid-1’, ‘area’])

  • cross_label (The integer label of center cross)

get_number_rings(img: ndarray[Any, dtype[uint16]], mult_factor: int = 5) int#

Estimates the number of rings in a rings object using the location of the center cross.

Parameters:
  • img – input image (after smoothing)

  • mult_factor – multiplication factor used to segment the cross

Returns:

num_beads: estimated number of beads

preprocess_img() ndarray[Any, dtype[uint16]]#

Pre-process the raw-intensity image with rescaling and smoothing, using pre-defined parameters derived from the image magnification.

Returns:

smooth: smoothed image

run() Tuple[ndarray[Any, dtype[bool_]], ndarray[Any, dtype[uint16]], DataFrame, int]#
segment_cross(img: ndarray[Any, dtype[uint16]], mult_factor_range: Tuple[int, int] = (1, 5), input_mult_factor: Optional[float] = 0.0) Tuple[ndarray[Any, dtype[bool_]], DataFrame]#

Segments the center cross in the image by iterating over the intensity-threshold parameter until exactly one object larger than the expected cross size (in pixels) is segmented.

Parameters:
  • img – image (intensity, after smoothing)

  • mult_factor_range – range of multiplication factors used to determine the segmentation threshold

Returns:

  • seg_cross (binary image of segmented cross)

  • props (dataframe describing the centroid location and size of the segmented cross)

segment_rings_dot_filter(img_2d: ndarray[Any, dtype[uint16]], seg_cross: ndarray[Any, dtype[bool_]], num_beads: int, minArea: int, search_range: Tuple[float, float] = (0, 0.75), size_param: float = 2.5) Tuple[ndarray[Any, dtype[bool_]], ndarray[Any, dtype[uint16]], Optional[float]]#

Segments rings using the 2D dot filter from aics-segmentation. The method loops through a range of filter parameters and automatically selects the parameter that segments the expected number of ring objects.

Parameters:
  • img_2d – rings image (after smoothing)

  • seg_cross – binary mask of the center cross object

  • num_beads – expected number of beads

  • minArea – minimum ring area; any segmented object below this size is filtered out

  • search_range – initial search range of the filter parameter

  • size_param – size parameter of the dot filter

Returns:

  • seg (binary mask of ring segmentation)

  • label (labelled mask of ring segmentation)

  • thresh (filter parameter after optimization)

segment_rings_intensity_threshold(img: ndarray[Any, dtype[uint16]], filter_px_size=50, mult_factor=2.5) Tuple[ndarray[Any, dtype[bool_]], ndarray[Any, dtype[uint16]]]#

Segments rings using an intensity-threshold method.

Parameters:
  • img – rings image (after smoothing)

  • filter_px_size – any segmented object below this size is filtered out

  • mult_factor – multiplication factor used to adjust the threshold

Returns:

  • filtered_seg (binary mask of ring segmentation)

  • filtered_label (labelled mask of ring segmentation)

camera_alignment_core.alignment_utils.get_center_z(img_stack: ndarray[Any, dtype[uint16]], thresh=(0.2, 99.8)) int#

Get the index of the center z-slice by finding the slice with the maximum contrast value.

Parameters:
  • img_stack – a 3D (or 2D) image stack

Returns:

center_z: index of the center z-slice