Original Article

Improved Detection of Branching Points in Algorithms for Automated Neuron Tracing from 3D Confocal Images

Yousef Al-Kofahi,1 Natalie Dowell-Mesfin,2 Christopher Pace,2 William Shain,2 James N. Turner,2 Badrinath Roysam1*

1 Rensselaer Polytechnic Institute, Troy, New York 12180
2 The Wadsworth Center, NY State Department of Health, Albany, New York 12201-0509

Received 29 December 2006; Revision Received 24 August 2007; Accepted 31 October 2007

This article contains supplementary material available via the Internet at http://www.interscience.wiley.com/jpages/1552-4922/suppmat.

*Correspondence to: Prof. Badrinath Roysam, JEC 7010, Rensselaer Polytechnic Institute, Troy, NY 12180-3590, USA. Email: [email protected]

Published online 7 December 2007 in Wiley InterScience (www.interscience.wiley.com)
DOI: 10.1002/cyto.a.20499

Abstract
Automated tracing of neuronal processes from 3D confocal microscopy images is essential for quantitative neuroanatomy and neuronal assays. Two basic approaches are described in the literature: one based on skeletonization, and another based on sequential tracing along neuronal processes. This article presents algorithms for improving the rate of detection, and the accuracy of estimating the location and process angles at branching points, for the latter class of algorithms. The problem of simultaneously detecting branch points and estimating their measurements is formulated as a generalized likelihood ratio test defined on a spatial neighborhood of each candidate point, in which likelihoods were computed using a ridge detection approach. The average detection rate increased from 37 to 86%. The average error in locating the branch points decreased from 2.6 to 2.1 voxels in 3D images. The generalized hypothesis test improves the rate of detection of branching points, and the accuracy of location estimates, enabling a more complete extraction of neuroanatomy and more accurate counting of branch points in neuronal assays. More accurate branch point morphometry is valuable for image registration and change analysis. © 2007 International Society for Analytical Cytology

Key terms
automated neurite tracing; branch points; ridge detection; generalized likelihood ratio test
THE goal of this work is to improve automated analysis of branching points in cyto-
logical structures such as neurites, and histological structures such as vasculature. By a branch point we mean a location where a process bifurcates. We are interested in correctly counting such locations in an image, and in estimating the branch locations consistently. The analysis of branch points is of interest in many applications
(1–9). They are of interest in neuroanatomic studies (6), development studies (7),
endpoints in toxicological and screening assays (8,9), and also valuable as landmarks
for image registration (4,5).
Several algorithms are presented in the literature to segment and analyze tubular
structures such as neurites and vasculature (10–36). Broadly speaking, two different
approaches exist. The first is based on 2D/3D skeletonization algorithms (24–27). In
this approach, the image volume is first segmented or binarized to extract the fore-
ground structures of interest, and the resulting binary image is systematically thinned
to arrive at the skeleton of the neurite that is processed further. The second approach
is referred to as vectorization, or tracing (10–23). In this approach, a set of initial
‘‘seed’’ points are extracted from the image and the neurites are traced sequentially
from these seed points by exploiting their generalized tube geometry. This category is
usually fully automated such that initial seed points and their orientations are
selected automatically (10–17). In some instances, the user manually specifies the initial points and directions, and then the algorithm traces the whole branching structure recursively (18–23).

The present work focuses on a specific aspect (branch points) in fully automated or ''exploratory'' tracing algorithms (10–17). These algorithms are attractive in terms of speed since they operate directly on the image data. They are also locally adaptive and more robust to image artifacts compared to skeletonization algorithms, since they operate mostly on the foreground structures rather than the entire image volume. Their computation time scales favorably with growing image size, since the computational effort is proportional to the amount of structure in the image rather than the image size. Skeletonization algorithms are, in principle, much more general in concept and applicability, and several authors have used them to also analyze secondary structures such as spines (26). In practice, they are susceptible to generating small ''barb''-like artifacts for noise-caused surface irregularities in the image segmentation.

Cytometry Part A 73A: 36–43, 2008

15524930, 2008, 1, Downloaded from https://onlinelibrary.wiley.com/doi/10.1002/cyto.a.20499, Wiley Online Library on [24/07/2023]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License

Figure 1. (A) A 2D projection of a small region extracted from a larger 3D image. (B) A case of missed detection using the previous algorithm. (C) The result of using the proposed method.
The reason why automated tracing algorithms perform
poorly at branch points is simple—they are based on a geo-
metric model (generalized tubes) that is poorly satisfied
around branch points, leading to localized tracing errors. This
issue was described and addressed for the retinal vessel tracing
problem by Tsai et al. (1), who presented a model-based algo-
rithm, termed the exclusion region and position refinement
(ERPR), to improve the accuracy and repeatability of estimat-
ing the locations of branching points from 2D images. The
ERPR algorithm assumes that the branch points have been
detected already, and only refines their location and angular
measurements in 2D space. In other words, the ERPR method
does not improve the rate of detection of branch points, while
also being two-dimensional.
The present work makes several new contributions. First,
it improves the primary rate of detection of branching points.
Second, it addresses the problem of accurately estimating the
locations of branching points. Accurate locations lead to
improved estimation of the intersection angles of the neurites.
Finally, it allows both improved detection as well as location/
angle estimation in three-dimensional space.
Although our methods can be adapted to any tracing
algorithm, our description here is an extension of the explora-
tory tracing algorithms presented by Al-Kofahi et al. (11). The
presented algorithms mainly improve the detection rate, and
secondarily, the accuracy of branch point location and angle
estimates. As a motivating example, Figure 1A shows an x-y
projection of a small volume extracted from a larger 3D image.
Figure 1B illustrates a case of missed detection using the previ-
ous algorithm. Figure 1C shows the result of using the pro-
posed method. The rest of the article describes our methodol-
ogy, and its performance assessment. The steps used in this
work are listed in the flow chart shown in Figure 2.

MATERIALS AND METHODS


Specimen Preparation and Imaging Protocols
This section describes materials and methods for 3D ima-
ging and image analysis. The supplementary document
describes the same for 2D imaging and analysis of cultured
neurons that may be of interest to many readers. The brain tissue slices were obtained from Wistar rats. After sedation, each animal was perfused transcardially with phosphate-buffered saline, followed by 4% paraformaldehyde and 4% sucrose in 0.1 M phosphate buffer. The brains were removed and postfixed for 1–4 h. The visual cortex was blocked, embedded in 4% agar to provide structural support during the injection, and sectioned with an Oxford Vibratome to produce 600-µm thick tissue slices. These were collected in 0.1 M phosphate buffer and placed on the stage of an Olympus microscope equipped for epifluorescence microscopy. Individual neurons were impaled with a glass micropipette and injected with 4% Alexa 594 (Molecular Probes, now Invitrogen, Portland, OR) in distilled H2O. Typically, 15 or more cells can be injected in a single slice, and 200 or more cells in a single animal. The cells were selected randomly, and spaced apart sufficiently to include a single neuron in each field. Immediately after the last cell in a slice was filled, the slice was incubated in 4% paraformaldehyde at 4°C for 2–18 h and then resectioned into 250-µm tissue slices. All subsequent processing was done at room temperature with continuous agitation. Sections were placed in phosphate buffer containing 3% normal goat serum and 2% Triton X-100 for 1 h, rinsed, and incubated for 3–16 h in ABC (Vector Elite) in phosphate buffer with 0.6% Triton X-100 and 0.5% BSA. The sections were rinsed and incubated in 0.05% DAB in TRIS buffer for 30 min, and then in a DAB/glucose oxidase solution for 25–60 min. Sections were mounted in 50% glycerol/50% phosphate buffer with a flat plastic spacer between two coverslips to minimize distortion.

Figure 2. A flow chart showing the steps used in our algorithm.

The images were collected using a NORAN Oz confocal attachment mounted on an Olympus IX-70 inverted infinity-corrected microscope. A long-working-distance water-immersion 40× lens (Zeiss 46 17 02, NA 1.15) was used, with a field size of 192 µm × 180 µm and 0.375 µm/pixel. The optical sections were spaced 0.5 µm apart, which is less than the depth of field of the lens, so the data was finely sampled. Some images were deconvolved using the blind deconvolution software of NORAN running on an SGI Origin with an R10000 processor. To assess the performance of the proposed methods, a set of 15 3D neuronal images was tested. The images, in general, were of varying planar dimensions and depths. The number of optical sections collected varied from 30 to more than 300. Also, each of these images contains one neuron, and the average numbers of neurites and branches per image (neuron) were 27 and 23, respectively.

Initial Automated 3D Tracing of Neurites
The 3D images were traced using the algorithm described in (11), but with the following improvement: the robust median response of the correlation templates was used rather than the average, following the work of Abdul-Karim et al. (16). The resulting traces were processed to extract all endpoints. Some endpoints resulted from reaching the natural end of a neurite, or from reaching a branch point. Others represent gaps in traces of neurites that resulted from imaging noise, artifacts, or nonuniform staining. These endpoints were subjected to the test described in the following sections.

Figure 3. (A) The volume transformation process. (B) The 3D template used in the likelihood ratio test.

Selecting Candidate Points for Detection and Refinement
Each trace endpoint was evaluated to determine whether or not a branch point might exist. The points that passed this step were termed ''candidate points.'' To reduce the computation associated with spatial transformation operations over 3D volumes, we transformed a small volume of neighboring voxels around each endpoint to a standardized pose, as illustrated in Figure 3A. In this standardized pose, a local extrapolation (based on five previously traced points) of the trace over a distance lmax was mapped to the x axis. Here, lmax was not less than the maximum of the vertical and horizontal neurite widths obtained from the tracing results. The size of the volume can be set by the user; the only requirement is that it fully include the branch point even for the thickest expected neurites. For the examples shown here, this volume was typically lmax × 21 × 21 voxels. A larger choice of this volume (side longer than 21) incurred a greater computational cost without improving accuracy or performance. Too small a choice will result in missed branch points. Note that some of the transformed points could have noninteger coordinates, so we used bilinear interpolation to fit the volume to integer coordinates. An endpoint is considered a candidate if there is at least one point from another traced segment inside the lmax × 21 × 21 volume. The selected candidate points are subjected to the generalized hypothesis-testing-based detection and refinement step described in the next section.
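The pose-standardization step above can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration under our own assumptions, not the authors' implementation: the orthonormal-frame construction, the function and parameter names, and the use of trilinear (order-1) interpolation in place of the bilinear interpolation mentioned in the text are ours.

```python
import numpy as np
from scipy.ndimage import affine_transform

def standard_pose_volume(image, endpoint, direction, l_max, half=10):
    """Resample an (l_max x 21 x 21) neighborhood of `endpoint` so that the
    extrapolated trace direction maps onto the +x axis of the output volume."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    # Build an orthonormal frame (d, u, v); u and v span the cross-section.
    helper = np.array([0.0, 0.0, 1.0]) if abs(d[2]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, helper)
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    R = np.stack([d, u, v], axis=1)  # columns map output axes to input axes
    # affine_transform maps output coordinates o to input coordinates R @ o + offset;
    # place `endpoint` at the center of the first cross-sectional slice.
    offset = np.asarray(endpoint, float) - R @ np.array([0.0, half, half])
    out_shape = (int(l_max), 2 * half + 1, 2 * half + 1)
    return affine_transform(image, R, offset=offset, output_shape=out_shape, order=1)
```

Candidate endpoints whose standardized volume contains a point from another traced segment would then be passed on to the hypothesis test.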


 
Computing Foreground and Background Likelihoods
To be able to detect branch points, we computed the likelihood of each pixel/voxel to fall in either the foreground or the background. The likelihood values were computed using a 3D extension of the 2D ridge detector proposed by Meijering et al. (19). Given a 3D image f and a normalized 3D Gaussian filter G, we first compute

$$f_{ij}(X) = (f * G_{ij})(X), \quad \text{with} \quad G_{ij}(X) = \frac{\partial^2}{\partial i\,\partial j}\, G(X) \tag{1}$$

where * denotes spatial convolution, X is a voxel position, and i and j can be x, y, or z. Then, we form the Hessian-like matrix at each pixel/voxel X as follows:

$$H'_f(X) = \begin{bmatrix} f_{xx}(X) + \tfrac{\alpha}{2} f_{yy}(X) + \tfrac{\alpha}{2} f_{zz}(X) & (1-\alpha)\, f_{xy}(X) & (1-\alpha)\, f_{xz}(X) \\ (1-\alpha)\, f_{xy}(X) & f_{yy}(X) + \tfrac{\alpha}{2} f_{xx}(X) + \tfrac{\alpha}{2} f_{zz}(X) & (1-\alpha)\, f_{yz}(X) \\ (1-\alpha)\, f_{xz}(X) & (1-\alpha)\, f_{yz}(X) & f_{zz}(X) + \tfrac{\alpha}{2} f_{xx}(X) + \tfrac{\alpha}{2} f_{yy}(X) \end{bmatrix} \tag{2}$$

where α is a parameter whose suggested optimal value in (19) is 1/3. The eigenvectors V(X) and eigenvalues λ(X) of the matrix above are then computed, and the foreground likelihood (neuriteness) at each voxel X is computed as follows:

$$q(X) = \begin{cases} \lambda_{\max}(X)/\lambda_{\min} & \text{if } \lambda_{\max}(X) < 0 \\ 0 & \text{if } \lambda_{\max}(X) \ge 0 \end{cases} \tag{3}$$

where λmax(X) is the eigenvalue with the largest magnitude at the current pixel/voxel X, and λmin is the smallest eigenvalue over all the voxels in the image. Finally, the likelihood of each voxel X to be in the background is computed as 1 − q(X).

One drawback of using this neuriteness measure is that it is computationally expensive, since it involves computing the eigenvalues of the Hessian matrix at each voxel in the image. We first implemented the neuriteness algorithm for 2D images and then extended it to 3D images. In 2D images of hundreds of thousands to a few million pixels, this operation took from a few seconds up to a minute. However, it took from a few minutes up to an hour when applied to 3D images of tens of millions to hundreds of millions of voxels. Our solution was to limit computation of the neuriteness values to the voxels inside the lmax × 21 × 21 volumes at each candidate point. The size of each volume varies depending on lmax, but it is usually a few thousand voxels. As an example, processing a 3D image with 25 candidate points requires computing the neuriteness values for 25 small volumes, i.e., a total of a few hundred thousand voxels, which can be processed in less than a minute.

Detecting Branch Points by Generalized Hypothesis Testing
The algorithm used for extracting branch points is based on a generalized likelihood ratio test (GLRT). This test has been used by Mahadevan et al. (37) for vessel detection. We searched for an untraced neurite segment in the lmax × 21 × 21 volume starting from the candidate point. In our test we decided between two possible hypotheses H0 and H1, where the null hypothesis H0 indicated the absence of a branch, while the alternative hypothesis H1 indicated the presence of a branch.

First, we selected a template of voxels T representing the assumed untraced part of a neurite segment. The template was represented by a volume of size lmax × 3 × 3, shown in Figure 3B. For simplicity, each template was represented by a vector Y = (Y1, ..., Yn)^T holding the voxels' intensities. The two hypotheses H0 and H1 were described using probabilities P0(Y) and P1(Y), respectively. Assuming that (Y1, ..., Yn) were independent and identically distributed (i.i.d.), the probability for all of the voxels inside T to satisfy either of the hypotheses was derived by simple multiplication as follows:

$$P_i(Y) = \prod_{k=1}^{n} P_i(Y_k), \quad i = 0, 1 \tag{4}$$

The likelihood ratio function was then written as:

$$L(Y) = \frac{\prod_{k=1}^{n} P_1(Y_k)}{\prod_{k=1}^{n} P_0(Y_k)} = \frac{\prod_{k=1}^{n} q(Y_k)}{\prod_{k=1}^{n} \bigl(1 - q(Y_k)\bigr)} \tag{5}$$

In three-dimensional space, directions are represented by two angles, θh (left-right) and θv (up-down). The template is initially oriented along the x axis as a result of the 3D transformation process described in the previous section, and the initial y- and z-orientations are set to zero. After that, we tested the template using 16 different directions by rotating the template by θh and θv in increments of 7.5°, i.e., {θh, θv} ∈ {(0°, 0°), (0°, ±7.5°), (0°, ±15°), (±7.5°, 0°), (±7.5°, ±7.5°), (±15°, 0°), (±15°, ±15°)}. In our test, we aimed to select the direction that maximizes the likelihood ratio function. The GLRT was then formulated as follows:

$$\max_{(\theta_h, \theta_v)} \; \frac{\prod_{k=1}^{n} q(Y_k \mid \theta_h, \theta_v)}{\prod_{k=1}^{n} \bigl(1 - q(Y_k \mid \theta_h, \theta_v)\bigr)} \;\; \overset{H_1}{\underset{H_0}{\gtrless}} \;\; \tau \tag{6}$$

where τ is a user-selected threshold in the range from 0 to 1. For most images, a value of 1 was used, corresponding to the case of balanced prior probabilities. For the lower-signal


Figure 4. Examples of branch points in 3D images shown as x, y, and z projections; the left set of projections shows results from the previ-
ous method and the right set shows results from the new method. For each image set, the three projections are shown, and some
branches are marked with arrows of different colors for different branch points. Traces are shown in green and branches are shown in red,
and the numbers indicate segment numbers generated by the automatic tracing algorithm. The blue dots indicate seed points used by the
automatic tracing algorithm.

images, it was lowered to 0.8 to increase the detection rate, at the expense of raising the rate of false positives. We used lower and upper bounds on the likelihoods, such that q ∈ [0.01, 0.99], to avoid multiplication or division by zero.

RESULTS AND VALIDATION
A set of 15 3D neuronal images was tested; typical examples are presented in this section. In addition, some 2D sample results based on cultured neurons can be found in the


electronic supplement. The algorithms were implemented in MATLAB and in C++ as part of the 3D tracing software. Since our method processes neighboring pixels/voxels at each candidate point only, the processing time depends on the complexity of the structure and the expected number of branch points.

Representative 3D examples are presented in Figure 4. The left set of 3D projections shows results from the previous method and the right set presents results from the current method. Both images were superimposed on the x-y, x-z, and y-z projections. The output of the automated tracing is shown in green and the branches are shown in red in both methods. Also, arrows of different colors are used to identify different branch points. The detection rate and the accuracy of the proposed algorithms were evaluated by a human observer and compared with the results from the previous merging method using 100 manually selected true branch points. The accuracy of detection is measured by finding the average of the distances between the correctly detected branch points and their true locations, where the true locations were found manually. The average detection rate increased from 37 to 86% and the average error in location estimation decreased from 2.6 to 2.1 voxels. While the broad significance of the increased detection rate is obvious, the improvement in the location estimation error is more specialized in terms of value. This is of value to image registration and change analysis algorithms. It is well known that small registration errors produce significant amounts of change detection errors. Actually, the consistency of branch location estimation resulting from the objective automation is of greater value. It is also valuable for more consistent estimation of branch angles if needed for an investigation (not pursued here). The detection rates and accuracy were used as the comparison criteria. We used ImageJ to perform the comparisons, where the original and the resulting images were opened side by side and each branch point was inspected visually by using the slice navigator and the zooming capabilities in ImageJ.

DISCUSSION
The primary outcome of this work is an increased rate of detection of branching points in automated three-dimensional neurite tracing algorithms, resulting in traces that are more complete and accurate in these critical regions. This is valuable for ensuring completeness of extracting neuronal topologies, and greater accuracy in neurite profiling, outgrowth, and toxicology assays that require counting of branch points (e.g., Cellomics HCS). Secondarily, we have also refined extraction of branch locations and angles. This is valuable for improving the accuracy of automated image registration and mosaicing (2–4) of images, especially for applications requiring automated change analysis, since registration errors are falsely detected as changes (7).

From a practical standpoint, the benefits of our algorithm do not incur an undue computational cost, since the underlying exploratory neurite tracing algorithms are extremely efficient, robust (11), and amenable to automatic tuning (38). The proposed computation only occurs at endpoints of traced segments, so its application across the image space is decidedly sparse compared to methods that process each pixel/voxel in the image. The proposed methods can be readily adapted to other tube-like structures in fluorescence images, for instance, microvasculature (39). Although our investigation was necessarily based on automated tracing algorithms reported by this group, the issues we describe are germane to other approaches as well.

The computational methods described in this article are robust to depth-dependent attenuation as long as the foreground signal exceeds the background. However, as with any image segmentation algorithm, the results are ultimately limited by image quality. Any methods to improve the depth of imaging and control signal attenuation can only improve our automated results.

LITERATURE CITED
1. Tsai C, Stewart C, Tanenbaum H, Roysam B. Model-based method for improving the accuracy and repeatability of estimating vascular bifurcations and crossovers from retinal fundus images. IEEE Trans Inf Technol Biomed 2004;8:142–153.
2. Can A, Stewart C, Roysam B, Tanenbaum H. A feature-based robust hierarchical algorithm for registering pairs of images of the curved human retina. IEEE Trans Pattern Anal Mach Intell 2002;24:347–364.
3. Can A, Stewart C, Roysam B, Tanenbaum H. A feature-based algorithm for joint, linear estimation of high-order image-to-mosaic transformations: Mosaicing the curved human retina. IEEE Trans Pattern Anal Mach Intell 2002;24:412–419.
4. Al-Kofahi O, Can A, Lasek A, Szarowski D, Turner J, Roysam B. Hierarchical algorithms for affine 3-D registration of neuronal images acquired by confocal laser scanning microscopy. J Microsc 2003;211:8–18.
5. Can A, Al-Kofahi O, Lasek S, Szarowski D, Turner J, Roysam B. Attenuation correction in confocal laser microscopes: A novel two-view approach. J Microsc 2003;211:67–79.
6. Ascoli G, Krichmar J, Nasuto S, Senft S. Generation, description, and storage of dendritic morphology data. Philos Trans R Soc Lond B Biol Sci 2001;356:1131–1145.
7. Al-Kofahi O, Radke R, Roysam B, Banker G. Automated semantic analysis of changes in image sequences of neurons in culture. IEEE Trans Biomed Eng 2006;53:1109–1123.
8. Kerrison J, Lewis R, Otteson D, Zack D. Bone morphogenetic proteins promote neurite outgrowth in retinal ganglion cells. Mol Vis 2005;11:208–215.
9. Forgie A, Wyatt S, Correll PH, Davies AM. Macrophage stimulating protein is a target-derived neurotrophic factor for developing sensory and sympathetic neurons. Development 2003;130(5):995–1002.
10. Al-Kofahi K, Lasek S, Szarowski D, Pace C, Nagy G, Turner J, Roysam B. Rapid automated three-dimensional tracing of neurons from confocal image stacks. IEEE Trans Inf Technol Biomed 2002;6:171–187.
11. Al-Kofahi K, Can A, Lasek S, Szarowski D, Dowell N, Shain W, Turner JN, Roysam B. Median based robust algorithms for tracing neurons from noisy confocal microscope images. IEEE Trans Inf Technol Biomed 2003;7:302–317.
12. Can A, Shen H, Turner J, Tanenbaum H, Roysam B. Rapid automated tracing and feature extraction from live high-resolution retinal fundus images using direct exploratory algorithms. IEEE Trans Inf Technol Biomed 1999;3:125–138.
13. Gang L, Chutatape O, Krishnan S. Detection and measurement of retinal vessels in fundus images using amplitude modified second-order Gaussian filter. IEEE Trans Biomed Eng 2002;49:168–172.
14. Xiong G, Zhou X, Degterev A, Ji L, Wong S. Automated neurite labeling and analysis in fluorescence microscopy images. Cytometry A 2006;69A:494–505.
15. Weaver C, Pinezich J, Lindquist W, Vazquez M. An algorithm for neurite outgrowth reconstruction. J Neurosci Methods 2003;124:197–205.
16. Abdul-Karim M-A, Al-Kofahi K, Brown E, Jain R, Roysam B. Automated tracing and change analysis of tumor vasculature from in vivo multi-photon confocal image time series. J Microvasc Res 2003;66:113–125.
17. Tyrrell J, Mahadevan V, Tong R, Roysam B, Brown E, Jain R. 3-D model-based complexity analysis of tumor microvasculature from in vivo multi-photon confocal images. J Microvasc Res 2005;70:165–178.
18. van Cuyck J, Gerbrands J, Reiber J. Automated centerline tracing in coronary angiograms. Pattern Recognit Artif Intell 1988;7:169–183.
19. Meijering E, Jacob M, Sarria JC, Steiner P, Hirling H, Unser M. Design and validation of a tool for neurite tracing and analysis in fluorescence microscopy images. Cytometry A 2004;58A:167–176.
20. Falcão A, Udupa J, Miyazawa F. An ultra-fast user-steered image segmentation paradigm: Live wire on the fly. IEEE Trans Med Imaging 2000;19:55–62.
21. Flasque N, Desvignes M, Constans J, Revenu M. Acquisition, segmentation and tracking of the cerebral vascular tree on 3D magnetic resonance angiography images. Med Image Anal 2001;5:173–183.


22. Wink O, Niessen W, Viergever MA. Multiscale vessel tracking. IEEE Trans Med Imaging 2004;23:130–133.
23. Schmitt S, Evers J, Duch C, Scholz M, Obermayer K. New methods for the computer-assisted 3-D reconstruction of neurons from confocal image stacks. Neuroimage 2004;23:1283–1298.
24. Cohen A, Roysam B, Turner J. Automated tracing and volume measurements of neurons from 3-D confocal fluorescence microscopy data. J Microsc 1994;173:103–114.
25. He W, Hamilton T, Cohen A, Holmes T, Pace C, Szarowski D, Turner J, Roysam B. Automated three-dimensional tracing of neurons in confocal and brightfield images. Microsc Microanal 2003;9:296–310.
26. Koh I, Lindquist W, Zito K, Nimchinsky E, Svoboda K. An image analysis algorithm for dendritic spines. Neural Comput 2003;14:1283–1310.
27. Weaver C, Hof P, Wearne S, Lindquist W. Automated algorithms for multiscale morphometry of neuronal dendrites. Neural Comput 2004;16:1353–1383.
28. Staal J, Abramoff M, Niemeijer M, Viergever M, van Ginneken B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans Med Imaging 2004;23:501–509.
29. Maddah M, Afzali-Kusha A, Soltanian-Zadeh H. Efficient center-line extraction for quantification of vessels in confocal microscopy images. Med Phys 2003;30:204–211.
30. Wearne S, Rodriguez A, Ehlenberger D, Rocher A, Henderson S, Hof P. New techniques for imaging, digitization and analysis of three-dimensional neural morphology on multiple scales. Neuroscience 2005;136:661–680.
31. Gratama van Andel HAF, Meijering E, van der Lugt A, Vrooman H, de Monyé C, Stokking R. Evaluation of an improved technique for automated center lumen line definition in cardiovascular image data. Eur Radiol 2004;16:391–398.
32. Huang A, Nielson G, Razdan A, Farin G, Baluch D, Capco D. Thin structure segmentation and visualization in three-dimensional biomedical images: A shape-based approach. IEEE Trans Vis Comput Graph 2006;12:93–102.
33. Roysam B, Lin G, Abdul-Karim M, Al-Kofahi O, Al-Kofahi K, Shain W, Szarowski D, Turner J. Automated 3-D image analysis methods for confocal microscopy. In: Pauley J, editor. Handbook of Confocal Microscopy, 3rd ed. New York: Springer; 2006. Chapter 15, pp 316–337.
34. Jiang X, Mojon D. Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images. IEEE Trans Pattern Anal Mach Intell 2003;25:131–137.
35. Lowell J, Hunter A, Steel D, Basu A, Ryder R, Kennedy R. Measurement of retinal vessel widths from fundus images based on 2-D modeling. IEEE Trans Med Imaging 2004;23:1196–1204.
36. Chen J, Amini A. Quantifying 3-D vascular structures in MRA images using hybrid PDE and geometric deformable models. IEEE Trans Med Imaging 2003;23:1251–1262.
37. Mahadevan V, Narasimha-Iyer H, Roysam B, Tanenbaum H. Robust model-based vasculature detection in noisy biomedical images. IEEE Trans Inf Technol Biomed 2004;8:306–376.
38. Abdul-Karim MA, Roysam B, Dowell N, Jeromin A, Yuksel M, Kalyanaraman S. Automatic selection of parameters for vessel/neurite segmentation algorithms. IEEE Trans Image Process 2005;14:1338–1350.
39. Tyrrell J, di Tomaso E, Fuja D, Tong R, Kozak K, Brown E, Jain R, Roysam B. Robust 3-D modeling of vasculature imagery using superellipsoids. IEEE Trans Med Imaging 2007;26:223–237.
