
An ImageJ Plugin for the Benchmarking of 3D Segmentation Algorithms and the Specification of Ground Truth Datasets

Segmentation performance metrics play an important role in the optimization of automated image analysis workflows. We present an ImageJ benchmarking algorithm that objectively quantifies the deviation of a machine segmentation from the ground truth, i.e., a segmentation established manually by a human expert. The algorithm has two main modules: a benchmarking module and a ground truth creation module. Three-dimensional image stacks acquired by laser scanning confocal microscopy were used to test our prototype system. The benchmarking module measures the accuracy of image segmentation at both the object and the pixel level.
At the pixel level, the algorithm determines the correspondence between foreground and background regions. The numbers of true positive, false positive, true negative, and false negative pixels are counted; from these counts, the true positive rate, false positive rate, F-measure, and other common performance metrics are calculated for the 3D stack and for individual 2D slices.
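The pixel-level comparison described above can be sketched as follows. This is a minimal illustration, not the plugin's actual ImageJ/Java code; the function name `pixel_metrics` and the use of NumPy boolean masks are assumptions made for clarity.

```python
import numpy as np

def pixel_metrics(ground_truth, machine):
    """Count pixel-level agreement between two binary masks and derive
    common performance metrics (illustrative sketch only)."""
    gt = ground_truth.astype(bool)
    ms = machine.astype(bool)
    tp = np.sum(gt & ms)    # foreground in both masks
    fp = np.sum(~gt & ms)   # machine foreground, ground truth background
    fn = np.sum(gt & ~ms)   # machine background, ground truth foreground
    tn = np.sum(~gt & ~ms)  # background in both masks
    tpr = tp / (tp + fn) if tp + fn else 0.0  # true positive rate (recall)
    fpr = fp / (fp + tn) if fp + tn else 0.0  # false positive rate
    precision = tp / (tp + fp) if tp + fp else 0.0
    f_measure = (2 * precision * tpr / (precision + tpr)
                 if precision + tpr else 0.0)
    return {"TP": int(tp), "FP": int(fp), "FN": int(fn), "TN": int(tn),
            "TPR": tpr, "FPR": fpr, "F": f_measure}
```

Applying this per 2D slice and summing the four raw counts over all slices before computing the rates yields the corresponding metrics for the whole 3D stack.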
At the object level, the algorithm determines the correspondence between regions of interest (ROIs) in the reference (ground truth) and the target (machine segmentation) data set. For each slice, in addition to correctly identified ROIs, it quantifies four classes of segmentation errors: merged (many-to-one correspondence), split (one-to-many correspondence), false positive, and false negative objects. Subsequently, overlapping ROIs of adjacent slices are linked to form 3D objects. The final step of the object-level assessment involves scoring correctly and incorrectly identified 3D objects and producing a breakdown of the four classes of segmentation defects, which assists in troubleshooting automated segmentation algorithms.
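The per-slice object correspondence can be sketched with a simple overlap analysis of two label images. This is a hedged illustration, not the plugin's implementation: the function `classify_objects`, the label-image representation (positive integers, 0 = background), and the rule that any nonzero pixel overlap counts as a correspondence are all assumptions.

```python
import numpy as np

def classify_objects(ref_labels, tgt_labels):
    """Classify target (machine) objects against reference (ground truth)
    objects by pixel overlap in a single 2D slice (illustrative sketch)."""
    ref_ids = [i for i in np.unique(ref_labels) if i != 0]
    tgt_ids = [j for j in np.unique(tgt_labels) if j != 0]
    # For each object, record which objects of the other image overlap it.
    ref_matches = {i: set() for i in ref_ids}
    tgt_matches = {j: set() for j in tgt_ids}
    for i in ref_ids:
        for j in np.unique(tgt_labels[ref_labels == i]):
            if j != 0:
                ref_matches[i].add(j)
                tgt_matches[j].add(i)
    result = {"correct": 0, "split": 0, "merged": 0,
              "false_negative": 0, "false_positive": 0}
    for i, js in ref_matches.items():
        if not js:
            result["false_negative"] += 1  # reference object was missed
        elif len(js) > 1:
            result["split"] += 1           # one-to-many correspondence
        elif len(tgt_matches[next(iter(js))]) == 1:
            result["correct"] += 1         # one-to-one correspondence
    for j, matched_refs in tgt_matches.items():
        if not matched_refs:
            result["false_positive"] += 1  # spurious target object
        elif len(matched_refs) > 1:
            result["merged"] += 1          # many-to-one correspondence
    return result
```

The same overlap test, applied between label images of adjacent slices instead of between reference and target, is the basis for linking 2D ROIs into 3D objects.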
The benchmarking process requires labeled image stacks for both the ground truth and the machine segmentation. Given that an individual 3D image stack consists of 50-70 slices of 1024×1024 pixels, an entirely manual segmentation by a human expert is too labor-intensive and, as such, impractical. The ground truth creation module of our algorithm facilitates manual segmentation. Parameters for local segmentation are computed individually for each ROI by an initial adaptive segmentation and then edited manually. ROIs on subsequent slices are initialized using the parameters of overlapping ROIs on previous slices, thus speeding up the process of manual editing. The methodology is illustrated with examples of Drosophila embryonic nuclei in various phases of the cell cycle.
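The parameter propagation between slices can be sketched as follows. This is an illustrative sketch under stated assumptions, not the plugin's actual code: the function `propagate_parameters`, the ROI representation as a dict with a `bbox` of (x, y, width, height), the single local `threshold` parameter, and the fallback default are all hypothetical.

```python
def propagate_parameters(prev_rois, new_rois, default_threshold=128):
    """Initialize ROIs on a new slice from overlapping ROIs on the
    previous slice (illustrative sketch). A new ROI inherits the locally
    adapted threshold of the first overlapping previous ROI, falling
    back to a default when nothing overlaps."""
    def overlaps(a, b):
        # Axis-aligned bounding-box intersection test.
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    initialized = []
    for roi in new_rois:
        threshold = default_threshold
        for prev in prev_rois:
            if overlaps(roi["bbox"], prev["bbox"]):
                threshold = prev["threshold"]
                break
        initialized.append({**roi, "threshold": threshold})
    return initialized
```

Because adjacent confocal slices of the same nucleus usually overlap, most ROIs start from parameters that are already close to correct, and the expert only needs to make small corrections.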

benchmarking, ground truth, 3D, segmentation algorithm evaluation, ImageJ

Janos Kriston-Vizi

Bioinformatics Institute, A*STAR, Singapore


Short Biography   
Main career stages:
Postdoctoral Fellow: Bioinformatics Institute, Singapore (2007-)
Postdoctoral Fellow: Kyoto University, Japan (2005-2007)
PhD: Corvinus University of Budapest, Hungary (2000-2005)
MSc: Corvinus University of Budapest, Hungary (1994-2000)
Biology-related image processing and statistical analysis are the two common threads running through Janos Kriston-Vizi's research career. In 2000, he graduated with an MSc degree, majoring in informatics, at the Technical Department and the Department of Mathematics.
In his PhD work, he continued working on image processing at the Corvinus University of Budapest. Meanwhile, he was awarded a Japanese Government scholarship and conducted research at Kyoto University as a research scholar for two years. The objective of his study was to reveal correlations between physiological (water stress) and physical (reflectance in the visible, near-infrared, and thermal bands) properties of biological material. After obtaining his PhD in 2005, he continued his research at Kyoto University as a JSPS postdoctoral fellow. He has authored 16 publications. He joined the Bioinformatics Institute at Biopolis in Singapore in 2007 as a postdoctoral fellow, working in the Imaging Informatics Group headed by Martin Wasser.

© Luxembourg Institute of Science and Technology