
BioVoxxel 3D Box (bv3dbox)

REMARK: Please inform me about any issues you encounter!


Most of the known BioVoxxel Toolbox functions are now available for 2D and 3D images in one place. All functions are heavily based on GPU computing via the fabulous CLIJ2 library. Segmentation output relies more strongly on labels (intensity coding of objects) instead of ROIs. Those labels can be used equivalently to ROIs with many CLIJ2 functions. Label images created with other tools such as MorphoLibJ are also suitable inputs for any plugin using labels.



Installation

The BioVoxxel 3D Box is distributed via the BioVoxxel 3D Box update site in Fiji.


Functionalities

Filtering

Flat Field Correction

The flat field correction corrects for uneven illumination, including an optional dark-field (dark current) image subtraction. The dark-field image can be omitted if unavailable. If the original image to be corrected is a stack, the flat-field and dark-field images can be provided as single images; these will be adapted to the slice number of the original stack. Otherwise, the dark-field and flat-field images need to have the same dimensions as the image to be corrected. Input images can be 8-, 16-, or 32-bit. RGB images are not yet supported (but will be in a future version). The output image is always 32-bit to hold correct floating-point values after image division.

Formula:

\[result = { original - darkfield \over flatfield - darkfield } * { average\;of\;(flatfield - darkfield) }\]
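
As a rough illustration of this formula only (not the plugin's CLIJ2-based GPU implementation), a minimal NumPy sketch could look like the following; the function and variable names are made up for the example:

```python
import numpy as np

def flat_field_correct(original, flatfield, darkfield=None):
    """Illustrative sketch of the flat-field correction formula above."""
    original = original.astype(np.float32)
    flatfield = flatfield.astype(np.float32)
    darkfield = np.zeros_like(flatfield) if darkfield is None else darkfield.astype(np.float32)

    corrected_flat = flatfield - darkfield  # a real implementation should guard against zeros here
    # Rescaling by the mean of the corrected flat-field keeps the result in a
    # familiar intensity range instead of ratios around 1.0.
    return (original - darkfield) / corrected_flat * corrected_flat.mean()
```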



Pseudo Flat Field Correction

The pseudo flat field correction takes a copy of the original image to be corrected and blurs it with a Gaussian blur filter using the specified radius. If the image is a scaled 3D stack (with indicated units such as µm), the filter considers the x/y/z scaling ratio and performs the blurring accordingly in 3D space. This way, the created background stays undistorted in relation to the original data. If the original image is a time series or its slices should be considered independent, the blurring can be forced to be done in 2D and the correction is applied slice by slice to the original. The background image can be displayed for easier adjustment of the radius value. The blurring radius should be high enough to eliminate all traces of original objects in the background image; only the intensity shading should remain. If this is impossible to achieve, the method might not be suitable for that particular type of image. In the case of a 3D image, all slices can be checked using the stack slice slider. The output is 32-bit to hold accurate floating-point pixel intensity values. The calculation follows the flat-field correction formula above, without a dark-field subtraction.


Important: this is a non-quantitative optical correction. Intensity values will not be corrected according to any real uneven illumination and are therefore NOT suitable for intensity quantifications anymore! If intensity quantification is desired and uneven illumination needs to be corrected, the Flat Field Correction must be used.
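
For intuition, the blur-and-divide idea behind the pseudo flat field correction can be sketched with SciPy as follows (the plugin itself runs on the GPU via CLIJ2; the names, defaults, and (z, y, x) ordering below are assumptions of this example):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pseudo_flat_field_correct(stack, radius, voxel_size=(1.0, 1.0, 1.0)):
    """Illustrative sketch: blur a copy as background estimate, then divide."""
    img = np.asarray(stack, dtype=np.float32)   # (z, y, x)
    # scale the sigma per axis so the blur is isotropic in physical space,
    # mirroring the handling of scaled 3D stacks described above
    sigma = [radius / s for s in voxel_size]
    background = gaussian_filter(img, sigma=sigma)
    # flat-field formula without dark-field, rescaled by the mean background
    return img / background * background.mean()
```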


Recursive Filter

A recursive filter repetitively applies the same filter to the previously filtered version of the underlying image. Specifically for the median filter, this keeps shape alterations low, removes noise very effectively, homogenizes objects, and still preserves the borders of even small objects better than a single median filter with a bigger radius. It also performs efficiently due to the small filter size.
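
A minimal sketch of the recursive idea, shown here with a SciPy median filter (the plugin uses CLIJ2 filters and its own parameter names):

```python
import numpy as np
from scipy.ndimage import median_filter

def recursive_median(image, radius=1, iterations=5):
    """Re-apply a small median filter to the previous result instead of
    using a single filter with a large radius."""
    result = np.asarray(image, dtype=np.float32)
    size = 2 * radius + 1
    for _ in range(iterations):
        result = median_filter(result, size=size)
    return result
```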



Segmentation

Threshold Check

A helper tool to identify suitable histogram-based automatic intensity thresholds and to compare them qualitatively and quantitatively. It is based on the publication: Qualitative and Quantitative Evaluation of Two New Histogram Limiting Binarization Algorithms, Brocher J., IJIP (2014).

The Threshold Check allows comparing all implemented Auto Thresholds from ImageJ (by Gabriel Landini) and their counterparts from the CLIJ2 library (by Robert Haase). It uses false colors to indicate how the thresholded result relates to an approximated ground truth.

The ground truth is in this case NOT the perfect, desired outcome but rather the next best estimation. It will also contain unspecific objects (or, generally, pixels) if the underlying image is not pre-processed by image filtering and/or background subtraction. It just serves as the quickest and most direct way of comparing the extraction to an approximation of an acceptable outcome!

In the following example image, the segmentation result is acceptable according to the Jaccard and Dice indices shown in Fiji’s main window, but the red pixels indicate a slight over-segmentation with the chosen threshold.


The Contrast saturation (%) slider serves to highlight brighter image content and “add” it to the ground truth. In the best case, objects of interest therefore appear completely in yellow. Parts highlighted in cyan are not recognized by the current threshold even though they are of interest, while parts shown in orange or red are picked up by the threshold but are rather undesired.

The Histogram usage field allows restricting the histogram during threshold calculation by ignoring black pixels, white pixels, or both. This can prevent a large number of saturated pixels from having an oversized influence on the final threshold. If “full” is chosen, the complete original histogram is taken into account (the default setting).

In the ImageJ main window, the status bar shows the Jaccard Index and Dice Coefficient values for the current setup, comparing the approximated ground truth with the segmentation result. The closer they are to 1.0, the more accurate the segmentation (in the context of the approximated ground truth); the lower they are, the lower the relative “segmentation quality”.

If the Contrast saturation value is kept fixed, a more objective and quantitative comparison of the performance of individual Auto Thresholds can be achieved.
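
For reference, the two scores are defined on the binary masks as follows; this NumPy sketch only shows the definitions and is not the plugin's internal computation:

```python
import numpy as np

def jaccard_and_dice(ground_truth, segmentation):
    """Compare the approximated ground truth with the thresholded result."""
    gt = np.asarray(ground_truth, dtype=bool)
    seg = np.asarray(segmentation, dtype=bool)
    intersection = np.logical_and(gt, seg).sum()
    union = np.logical_or(gt, seg).sum()
    total = gt.sum() + seg.sum()
    jaccard = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return jaccard, dice
```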


If the user finds a useful threshold, the output can be set to either binary with the values 0/255 (ImageJ binary standard), binary with 0/1 (CLIJ2 standard), or “Labels”, which extracts the objects filled with unique intensity values to be used as labeled connected components for further analysis (recommended setup).


Currently, stacks are automatically considered as volumes, and thresholding is done on the stack histogram to achieve consistent results over the complete stack.

Slice-by-slice thresholding might come up in a future release.


Voronoi Threshold Labeler

The labeler is meant to be used as an image segmentation tool combining image pre-processing with a variety of convolution filters, background subtraction methods, auto thresholding, and intensity maxima detection. The latter allows object separation similar to a watershed algorithm, but is only effective if Labels is chosen as output. Depending on the combination of pre-processing, background subtraction, threshold, and maxima detection, quite different objects can be extracted from an image.
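
Conceptually, the pipeline can be approximated with scikit-image, using a seeded watershed as a stand-in for the CLIJ2-based masked Voronoi labeling; all parameter names and defaults below are illustrative assumptions, not the plugin's options:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def voronoi_threshold_label(image, blur_sigma=1.0, maxima_min_distance=5):
    """Sketch: filter -> auto threshold -> intensity maxima -> label growth."""
    img = np.asarray(image, dtype=np.float32)

    # 1) pre-processing filter
    filtered = gaussian(img, sigma=blur_sigma, preserve_range=True)

    # 2) automatic intensity threshold -> binary mask
    mask = filtered > threshold_otsu(filtered)

    # 3) intensity maxima inside the mask act as seeds for object separation
    blobs, _ = ndi.label(mask)
    coords = peak_local_max(filtered, min_distance=maxima_min_distance, labels=blobs)
    seeds = np.zeros(img.shape, dtype=np.int32)
    seeds[tuple(coords.T)] = np.arange(1, len(coords) + 1)

    # 4) grow the seeds through the mask (Voronoi-like partitioning,
    #    approximated here with a seeded watershed on the inverted intensity)
    return watershed(-filtered, markers=seeds, mask=mask)
```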


Parameter meaning and usage:

3D Example:

(animated comparison: original volume vs. extracted labels)


Labels

Label Splitter

The label splitter is the equivalent of a watershed function for binary images or images already containing labeled objects. It separates objects according to the following methods. The output image is displayed as consecutive intensity labels (intensity = identifier). This is the last part of the Voronoi Threshold Labeler processing. All of these functions work best on 3D isotropic voxels, so consider running Make 3D Image Isotropic first!

Methods:

(animations: 3D distance map splitting and 3D maxima-sphere splitting)

The erosion methods are useful for bigger and irregularly shaped objects, while the maxima method performs better for smaller objects. The erosion-based methods ignore the field Maxima detection radius. Too high spot sigmas will delete smaller objects from the image.
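
As a rough analogue of the distance-map based splitting (not the plugin's exact eroded-maxima implementation), a seeded watershed on the Euclidean distance map looks like this:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_labels_by_distance(binary, maxima_min_distance=5):
    """Sketch: distance map maxima as seeds, then grow them back into the mask."""
    mask = np.asarray(binary, dtype=bool)

    # distance to the background: touching objects meet in a valley
    distance = ndi.distance_transform_edt(mask)

    # one seed per local maximum of the distance map
    blobs, _ = ndi.label(mask)
    coords = peak_local_max(distance, min_distance=maxima_min_distance, labels=blobs)
    seeds = np.zeros(mask.shape, dtype=np.int32)
    seeds[tuple(coords.T)] = np.arange(1, len(coords) + 1)

    return watershed(-distance, markers=seeds, mask=mask)
```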


Further separation methods are planned to be added, so stay tuned!


Separate Labels

The label separator takes a label image and places a separation in the form of background pixels between touching labels. This can be considered the equivalent of the standard binary watershed function in ImageJ / Fiji. It can influence further post-processing such as erosion or opening functions from the Post Processor.
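
The effect can be sketched with a simple neighborhood test: any foreground pixel whose surroundings contain a different non-zero label is set to background, which inserts a thin gap between touching labels (this is an illustration, not the plugin's CLIJ2 implementation):

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def separate_labels(labels):
    """Sketch: put background pixels wherever two different labels touch."""
    lbl = np.asarray(labels).astype(np.int64)
    # map background (0) to a sentinel so it never wins the minimum filter
    sentinel = lbl.max() + 1
    neighborhood_max = maximum_filter(lbl, size=3)
    neighborhood_min = minimum_filter(np.where(lbl == 0, sentinel, lbl), size=3)
    # a foreground pixel is "touching" if its neighborhood holds two different labels
    touching = (lbl > 0) & (neighborhood_max != neighborhood_min)
    separated = lbl.copy()
    separated[touching] = 0
    return separated
```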



Post Processor

This tool is meant to be used on binary images or labels, but most of its functions can also be applied as a normal image filter. In this way, it is partially the counterpart of the Filter Check.


Ongoing development: more filter functions will be added in the future.


Analysis

Object Inspector

The Object Inspector is the new version of the Speckle Inspector. It analyzes (secondary) objects inside (primary) objects. Input parameters are:

Results tables are available for primary as well as secondary objects including object counting and relational identification, size, intensity and some shape values.
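
The core relation between the two label images can be sketched as follows: each secondary object is assigned to the primary object that covers most of it, and secondaries are then counted per primary (illustrative only; the plugin reports far more measurements):

```python
import numpy as np

def count_secondaries_per_primary(primary_labels, secondary_labels):
    """Sketch: relate secondary objects to the primary objects containing them."""
    primary = np.asarray(primary_labels)
    secondary = np.asarray(secondary_labels)
    counts = {}
    for sec_id in np.unique(secondary):
        if sec_id == 0:
            continue
        covering = primary[secondary == sec_id].astype(np.int64)
        covering = covering[covering > 0]
        if covering.size == 0:
            continue  # secondary object lies outside all primary objects
        prim_id = int(np.bincount(covering).argmax())  # primary with largest overlap
        counts[prim_id] = counts.get(prim_id, 0) + 1
    return counts
```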



Overlap Extractor

This tool is the new version of the Binary Feature Extractor. It keeps objects from one image which overlap with objects from a second image within a specified area (2D) or volume (3D) range. All primary objects which are covered less or more than the specified range values are excluded from the analysis. The remaining ones are extracted into a separate image. Original primary objects can also be displayed with their actual volume coverage. Original statistics for all objects are displayed in one table if desired, while extraction statistics are displayed in a separate table (OE3D_Statistics).
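
The selection criterion can be sketched like this: for each primary object, compute the percentage of its area/volume covered by the second image and keep it only if that coverage falls inside the chosen range (parameter names are illustrative):

```python
import numpy as np

def extract_by_overlap(primary_labels, secondary_mask, min_percent=10.0, max_percent=100.0):
    """Sketch: keep primary objects covered by the secondary image within a range."""
    primary = np.asarray(primary_labels)
    overlap = np.asarray(secondary_mask) > 0
    kept = np.zeros_like(primary)
    for label_id in np.unique(primary):
        if label_id == 0:
            continue
        object_mask = primary == label_id
        coverage = 100.0 * np.count_nonzero(object_mask & overlap) / np.count_nonzero(object_mask)
        if min_percent <= coverage <= max_percent:
            kept[object_mask] = label_id
    return kept
```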



3D Neighbor Analysis

The neighbor analysis determines how many neighboring objects each labeled object has (the intensity values of the objects indicate their neighbor count). In addition, the neighbor counts as well as the count distribution can be plotted.
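
A simple way to picture the neighbor count (the plugin's exact neighborhood definition may differ, e.g. it can be distance based): dilate each object slightly and count how many other labels it then overlaps.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def count_touching_neighbors(labels, dilation_iterations=1):
    """Sketch: neighbor count per label via a small dilation."""
    lbl = np.asarray(labels)
    neighbor_counts = {}
    for label_id in np.unique(lbl):
        if label_id == 0:
            continue
        grown = binary_dilation(lbl == label_id, iterations=dilation_iterations)
        touched = np.unique(lbl[grown])
        # exclude background and the object itself
        neighbor_counts[int(label_id)] = int(np.count_nonzero((touched != 0) & (touched != label_id)))
    return neighbor_counts
```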

Parameters:



Additional Functions

Add Labels to 3D ROI Manager

This adds all 2D or 3D labels as ROIs to the 3D ROI Manager from the magnificent 3D Suite by Thomas Boudier.


In some cases this function might run a little unstable, and ROIs might not be immediately visible in the ROI Manager. Either try again or play with the Live ROI activation in the 3D ROI Manager.


Add Labels to 2D ROI Manager

This is based on a Groovy script from Bram van den Broek (@bramvdbroek), shown and discussed here.


Make 3D Image Isotropic

For some operations, isotropic voxels produce better segmentation results due to how the individual methods are applied to the image. Therefore, it can be advantageous to convert the image into one with isotropic voxels. This function considers the actual calibration of the image (e.g. in µm) and reslices the volume to create isotropic voxels. This, however, applies linear interpolation to the intensity values and changes them. So, in some cases one needs to choose between the best segmentation result and the most original intensity values.
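
The reslicing idea can be sketched with SciPy (the plugin's exact interpolation settings may differ; the (z, y, x) ordering and names are assumptions of this example):

```python
import numpy as np
from scipy.ndimage import zoom

def make_isotropic(stack, voxel_size=(2.0, 0.5, 0.5), order=1):
    """Sketch: resample so every axis gets the smallest voxel spacing,
    using linear interpolation (order=1), which alters intensity values."""
    img = np.asarray(stack, dtype=np.float32)   # (z, y, x)
    target = min(voxel_size)
    factors = [s / target for s in voxel_size]  # the coarsest axis is stretched most
    return zoom(img, zoom=factors, order=order)
```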


Convoluted Background Subtraction

The equivalent function to the original convoluted background subtraction is already on the to-do list.


Citation

If you use this library and its functions to generate and publish results, please consider acknowledging and citing the toolbox using the DOI.

DOI


Issues

https://github.com/biovoxxel/bv3dbox/issues


Contact

via e-mail or via the BioVoxxel gitter channel


Acknowledgement

The BioVoxxel 3D Box functions are heavily based on and rely strongly on the CLIJ library family. Therefore, this development would not have been possible without the work of Robert Haase and colleagues.

Robert Haase, Loic Alain Royer, Peter Steinbach, Deborah Schmidt, Alexandr Dibrov, Uwe Schmidt, Martin Weigert, Nicola Maghelli, Pavel Tomancak, Florian Jug, Eugene W Myers. CLIJ: GPU-accelerated image processing for everyone. Nat Methods (2019)

J. Ollion, J. Cochennec, F. Loll, C. Escudé, T. Boudier. (2013) TANGO: A Generic Tool for High-throughput 3D Image Analysis for Studying Nuclear Organization. Bioinformatics 2013 Jul 15;29(14):1840-1. http://dx.doi.org/10.1093/bioinformatics/btt276