C-Arm Positioning Using Virtual Fluoroscopy
1 Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
2 Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD
3 Siemens Healthineers, Erlangen, Germany
4 Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, MD
5 Department of Neurosurgery, Johns Hopkins University, Baltimore, MD
ABSTRACT
Introduction: Fluoroscopically guided procedures often involve repeated acquisitions for C-arm positioning at the cost of
radiation exposure and time in the operating room. A virtual fluoroscopy system with the potential to reduce dose and time spent in C-arm positioning is reported, utilizing three key advances: robust 3D-2D registration to a preoperative CT;
real-time forward projection on GPU; and a motorized mobile C-arm with encoder feedback on C-arm orientation.
Method: Geometric calibration of the C-arm was performed offline in two rotational directions (orbital α and angular β). Patient
registration was performed using image-based 3D-2D registration with an initially acquired radiograph of the patient. This
approach for patient registration eliminated the requirement for external tracking devices inside the operating room,
allowing virtual fluoroscopy with systems commonly available in fluoroscopically guided procedures within standard
surgical workflow. Geometric accuracy was evaluated in terms of projection distance error (PDE) in anatomical fiducials.
A pilot study was conducted to evaluate the utility of virtual fluoroscopy to aid C-arm positioning in image-guided surgery,
assessing potential improvements in time, dose, and agreement between the virtual and desired view.
Results: The overall geometric accuracy of DRRs in comparison to the actual radiographs at various C-arm positions was
PDE (mean ± std) = 1.6 ± 1.1 mm. The conventional approach required on average 8.0 ± 4.5 radiographs acquired while “fluoro hunting” to obtain the desired view. Positioning accuracy improved from 2.6° ± 2.3° (in α) and 4.1° ± 5.1° (in β) in the conventional approach to 1.5° ± 1.3° and 1.8° ± 1.7°, respectively, with the virtual fluoroscopy approach.
Conclusion: Virtual fluoroscopy could improve the accuracy of C-arm positioning and save time and radiation dose in the operating room. Such a system could be valuable for training fluoroscopy technicians as well as for intraoperative use in fluoroscopically guided procedures.
1. INTRODUCTION
Fluoroscopy is a common imaging modality for guiding surgical procedures. Many orthopaedic, neuro-, and ortho-trauma procedures require radiographic visualization of anatomy-specific views with surgical instrumentation and
implants. In obtaining the desired view, repeated C-arm fluoroscopy images are often acquired, where radiology
technicians use a trial-and-error approach of ‘fluoro hunting’ at the expense of time and radiation exposure to the patient
as well as personnel. To save time and radiation dose in the operating room, the methods proposed in this work generate
virtual fluoroscopy to assist the surgeon and/or radiology technician in C-arm positioning using a preoperative CT image
that is commonly available for patients who undergo surgery.
Previously proposed fluoroscopy simulation methods relied on external tracking systems to align the patient relative to the C-arm imaging coordinate system and were primarily intended for surgical training purposes
[1], [2]. Considering the task of C-arm positioning, the use of external tracking systems is challenging due to line-of-sight requirements and the addition of cumbersome hardware in the operating room. As a result, the approach reported here instead performs patient registration via image-based 3D-2D registration, eliminating the need for external tracking equipment.
2. METHODS
2.1 Digitally reconstructed radiograph (DRR) generation
Using a preoperative CT image of the patient, virtual fluoroscopy is generated by computing a simulated x-ray image referred to as a digitally reconstructed radiograph (DRR). To generate DRRs, CT data in Hounsfield units (HU) are first converted to linear attenuation coefficients (mm⁻¹) using the attenuation coefficient of water (µ_water), and DRRs are computed with a ray-tracing method based on tri-linear interpolation implemented according to [10]. A GPU-based parallel implementation using C++/CUDA was devised for fast, real-time computation of DRRs. Generating accurate DRRs resembling a radiograph at a given C-arm position depends on estimating the relative position between the C-arm and the patient in the world coordinate frame (w) of the operating room. To achieve this, the motion of the C-arm is measured during its manipulation using mechanical encoders attached to certain degrees-of-freedom (DoF) of the C-arm. In this work, we measured the two major rotational DoFs: (1) rotation within the plane of the C-arm gantry (henceforth referred to as ‘orbital’ and denoted by α), and (2) rotation perpendicular to the plane of the C-arm gantry (henceforth referred to as ‘angular’ and denoted by β). To calculate the relative transform (T_p^d) between the patient (p) and the C-arm detector (d), geometric calibration of the C-arm and patient registration are necessary.
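In explicit form (a sketch in the notation above; the precise parameterization of the calibrated pose is a detail of the calibration procedure), the patient-to-detector transform at encoder readings (α, β) is the composition

$$ T_p^d(\alpha, \beta) \;=\; T_w^d(\alpha, \beta)\, T_p^w, $$

where $T_w^d(\alpha, \beta)$ is the world-to-detector pose provided by the offline geometric calibration evaluated at the measured angles, and $T_p^w$ is the patient-to-world transform obtained from image-based 3D-2D registration.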
Figure 1: DRR generation using angle encoder readings (α and β) from the C-arm. The transformation of the patient relative to the C-arm detector (T_p^d) is computed using initial geometric calibration and patient registration steps.
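As a concrete illustration of this pipeline, the following is a minimal C++/CUDA sketch of ray-tracing DRR generation with tri-linear interpolation, in the spirit of [10]. The geometry struct, all names, and the fixed-step ray marching are illustrative assumptions rather than the authors' implementation; the HU-to-attenuation conversion µ = µ_water(HU/1000 + 1) follows the standard definition of Hounsfield units.

#include <cuda_runtime.h>
#include <math.h>

// Illustrative geometry description (assumed, not from the paper): the source
// and detector poses would be derived from T_p^d(α, β) via calibration and
// patient registration, expressed here in patient (CT) coordinates.
struct Geometry {
    float3 src;        // x-ray source position, mm
    float3 detOrigin;  // position of detector pixel (0,0), mm
    float3 detU, detV; // detector row/column steps (direction * pixel pitch), mm
    int3   dim;        // CT volume dimensions, voxels
    float3 voxSize;    // voxel size, mm
    float3 volOrigin;  // position of voxel (0,0,0), mm
};

__host__ __device__ inline float3 operator+(float3 a, float3 b) { return make_float3(a.x + b.x, a.y + b.y, a.z + b.z); }
__host__ __device__ inline float3 operator-(float3 a, float3 b) { return make_float3(a.x - b.x, a.y - b.y, a.z - b.z); }
__host__ __device__ inline float3 operator*(float s, float3 a)  { return make_float3(s * a.x, s * a.y, s * a.z); }

// Standard HU-to-attenuation conversion, applied to the volume in preprocessing.
__host__ __device__ inline float huToMu(float hu, float muWater) { return muWater * (hu / 1000.0f + 1.0f); }

// Tri-linear interpolation of the attenuation volume at voxel coordinates p.
__device__ float sampleTrilinear(const float* vol, int3 d, float3 p)
{
    int x0 = (int)floorf(p.x), y0 = (int)floorf(p.y), z0 = (int)floorf(p.z);
    if (x0 < 0 || y0 < 0 || z0 < 0 || x0 >= d.x - 1 || y0 >= d.y - 1 || z0 >= d.z - 1)
        return 0.0f;                                   // rays outside the volume contribute nothing
    float fx = p.x - x0, fy = p.y - y0, fz = p.z - z0;
    const float* s = vol + ((size_t)z0 * d.y + y0) * d.x + x0;
    size_t sx = 1, sy = d.x, sz = (size_t)d.x * d.y;
    float c00 = s[0]       + fx * (s[sx]           - s[0]);
    float c10 = s[sy]      + fx * (s[sy + sx]      - s[sy]);
    float c01 = s[sz]      + fx * (s[sz + sx]      - s[sz]);
    float c11 = s[sz + sy] + fx * (s[sz + sy + sx] - s[sz + sy]);
    float c0 = c00 + fy * (c10 - c00);
    float c1 = c01 + fy * (c11 - c01);
    return c0 + fz * (c1 - c0);
}

// One thread per detector pixel: integrate mu along the source-to-pixel ray.
__global__ void drrKernel(const float* mu, Geometry g, float* drr, int nu, int nv, float step)
{
    int u = blockIdx.x * blockDim.x + threadIdx.x;
    int v = blockIdx.y * blockDim.y + threadIdx.y;
    if (u >= nu || v >= nv) return;

    float3 pix = g.detOrigin + (float)u * g.detU + (float)v * g.detV;
    float3 dir = pix - g.src;
    float len = sqrtf(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    dir = (1.0f / len) * dir;

    float integral = 0.0f;
    for (float t = 0.0f; t < len; t += step) {         // fixed-step ray marching
        float3 w = g.src + t * dir;
        float3 p = make_float3((w.x - g.volOrigin.x) / g.voxSize.x,
                               (w.y - g.volOrigin.y) / g.voxSize.y,
                               (w.z - g.volOrigin.z) / g.voxSize.z);
        integral += sampleTrilinear(mu, g.dim, p);
    }
    // Line integral of mu (mm^-1) times step (mm) is dimensionless; exponentiate
    // to obtain a fluoroscopy-like intensity image (Beer-Lambert).
    drr[(size_t)v * nu + u] = expf(-integral * step);
}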
2.4 Experiments
A pilot study was performed using a mobile C-arm (Cios Alpha, Siemens Healthcare, Erlangen, Germany) to assess the utility of simulated fluoroscopy in comparison to the conventional ‘fluoro-hunting’ approach. Patient registration was performed using an initially acquired posterior-anterior (PA) radiograph of an anthropomorphic thorax phantom. The C-arm was then positioned by varying the rotations in the orbital and angular directions in 10° increments within the ranges −40° < α < 40° and 0° < β < 40°. The radiograph acquired at each C-arm position was compared with the corresponding virtual fluoroscopy image to evaluate geometric accuracy, quantified as the projection distance error (PDE) of anatomical fiducials identified manually in both the radiographs and the CT image.
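As an illustration of the PDE metric, the sketch below projects 3D fiducials into the detector plane with a 3×4 projection matrix (assumed here to come from the calibrated geometry at a given (α, β); the function and variable names are illustrative, not the study's implementation) and reports the mean 2D distance to the manually identified fiducials in the radiograph.

#include <array>
#include <cmath>
#include <vector>

using Mat34 = std::array<std::array<double, 4>, 3>;  // 3x4 projection matrix

// Mean projection distance error between projected CT fiducials and the
// corresponding fiducials identified in the radiograph (detector-plane mm
// when P maps to metric detector coordinates).
double meanPDE(const Mat34& P,
               const std::vector<std::array<double, 3>>& fidCT,   // 3D points, mm
               const std::vector<std::array<double, 2>>& fidXray) // matched 2D points
{
    double sum = 0.0;
    for (size_t i = 0; i < fidCT.size(); ++i) {
        const auto& X = fidCT[i];
        // Homogeneous projection: [u v w]^T = P [X Y Z 1]^T
        double u = P[0][0]*X[0] + P[0][1]*X[1] + P[0][2]*X[2] + P[0][3];
        double v = P[1][0]*X[0] + P[1][1]*X[1] + P[1][2]*X[2] + P[1][3];
        double w = P[2][0]*X[0] + P[2][1]*X[1] + P[2][2]*X[2] + P[2][3];
        double du = u / w - fidXray[i][0];
        double dv = v / w - fidXray[i][1];
        sum += std::sqrt(du * du + dv * dv);
    }
    return sum / fidCT.size();
}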
Figure 2: (A) Virtual fluoroscopy display.
3. RESULTS
3.1 Geometric accuracy assessment
Figure 3: A: PDE distributions across angular positions at a fixed orbital position and across orbital positions at a fixed angular position. B: Comparison of a radiograph (left) and the corresponding DRR (right) showing the similarity of the simulated and actual images. Canny edges from the DRR are shown in yellow on the actual radiograph.
The accuracy of the geometric calibration of the C-arm for the orbital and angular rotations, quantified using manually identified corresponding pairs of anatomical locations, was PDE (mean ± std) = 1.1 ± 0.9 mm. The overall accuracy of generating DRRs in comparison to the actual radiographs at different orbital and angular C-arm positions was found to be PDE = 1.6 ± 1.1 mm. Figure 3A shows PDE distributions separately across variations in angular position at a fixed orbital position and across variations in orbital position at a fixed angular position. Despite the non-isocentric nature of the C-arm motion in the orbital direction, the geometric calibration and image-based patient registration achieved comparable accuracy in both directions. Figure 3B qualitatively illustrates the alignment of a radiograph and the corresponding DRR. Such similarity in the image pair allows virtual fluoroscopy to serve as a guidance tool for accurate C-arm positioning.
Figure 4: A: Orbital and angular error distributions for the simulation and conventional approaches. B: Normalized cross-correlation (NCC) distributions calculated between the obtained and desired views for the two methods.
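For reference, the NCC agreement metric between an obtained view and the desired view can be sketched as follows (a minimal illustration on flattened equal-size images; names are assumptions, not the study's implementation):

#include <cmath>
#include <vector>

// Normalized cross-correlation of two images flattened to equal-length vectors.
// Returns 1.0 for images identical up to an affine intensity change.
double ncc(const std::vector<float>& a, const std::vector<float>& b)
{
    const size_t n = a.size();
    double ma = 0.0, mb = 0.0;
    for (size_t i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
    ma /= n; mb /= n;
    double num = 0.0, va = 0.0, vb = 0.0;
    for (size_t i = 0; i < n; ++i) {
        const double da = a[i] - ma, db = b[i] - mb;
        num += da * db; va += da * da; vb += db * db;
    }
    return num / std::sqrt(va * vb);
}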
Figure 5: Variability among operators in obtaining the five target views using the conventional and simulation methods. Canny edges extracted from the image obtained by each operator are overlaid in a separate color on the ground truth image. Note the dispersion of edges in the conventional approach compared to the more reproducible and accurate edges with the simulation (virtual fluoroscopy) approach.
4. CONCLUSIONS
This work demonstrated accurate methods for generating virtual fluoroscopy using image-based patient registration and geometric calibration of the C-arm. The pilot study indicated that the system could potentially decrease
the number of views required to position the C-arm during surgery and aid in improving the geometric accuracy of
positioning the C-arm to obtain an anatomically specific view. With this approach, the patient registration can be updated
using each radiograph acquired during the procedure to compensate for any motion during surgery. This approach for
virtual fluoroscopy does not add external hardware (e.g., trackers) or other equipment to the operating room and thus has the potential to translate to clinical use with systems already in the surgical arsenal and within standard OR workflow.
ACKNOWLEDGEMENTS
This work was supported by NIH Grant No. R01-EB-017226 and an academic-industry collaboration with Siemens Healthcare (XP Division, Erlangen, Germany). The authors extend their thanks to Jessica Wood, Bonnie Grantland, Lauryn
Hancock, Aris Thompson, Julia Stupi, and Shewaferaw Lema (Department of Radiology) for valuable discussion and
participation in the user study.
REFERENCES
[1] R. H. Gong, B. Jenkins, R. W. Sze, and Z. Yaniv, “A Cost Effective and High Fidelity Fluoroscopy Simulator using the Image-
Guided Surgery Toolkit (IGSTK),” Med. Imaging 2014 Image-Guided Proced. Robot. Interv. Model., vol. 9036, p. 11, 2014.
[2] O. J. Bott, K. Dresing, M. Wagner, B.-W. Raab, and M. Teistler, “Informatics in radiology: use of a C-arm fluoroscopy simulator to support training in intraoperative radiography,” Radiographics, vol. 31, pp. E64–E74, 2011.
[3] R. Munbodh, Z. Chen, D. A. Jaffray, D. J. Moseley, J. P. Knisely, and J. S. Duncan, “Automated 2D-3D registration of portal
images and CT data using line-segment enhancement,” Med Phys, vol. 35, no. 10, pp. 4352–4361, 2008.
[4] T. De Silva, A. Uneri, M. D. Ketcha, S. Reaungamornrat, G. Kleinszig, S. Vogt, N. Aygun, S.-F. Lo, J.-P. Wolinsky, and J. H.
Siewerdsen, “3D–2D image registration for target localization in spine surgery: investigation of similarity metrics providing
robustness to content mismatch,” Phys. Med. Biol., vol. 61, no. 8, pp. 3009–3025, Apr. 2016.
[5] M. D. Ketcha, T. De Silva, A. Uneri, G. Kleinszig, S. Vogt, J.-P. Wolinsky, and J. H. Siewerdsen, “Automatic Masking for
Robust 3D-2D Image Registration in Image-Guided Spine Surgery,” in SPIE Medical Imaging, 2016.
[6] A. Uneri, T. De Silva, J. W. Stayman, G. Kleinszig, S. Vogt, A. J. Khanna, Z. L. Gokaslan, J.-P. Wolinsky, and J. H. Siewerdsen,
“Known-component 3D–2D registration for quality assurance of spine surgery pedicle screw placement,” Phys. Med. Biol.,
vol. 60, no. 20, pp. 8007–8024, Oct. 2015.
[7] A. Uneri, J. Goerres, T. De Silva, M. Jacobson, M. Ketcha, S. Reaungamornrat, G. Kleinszig, S. Vogt, A. J. Khanna, J.-P. Wolinsky, and J. H. Siewerdsen, “Deformable 3D-2D registration of known components for image guidance in spine surgery,” in Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2016, in press.
[8] S.-F. L. Lo, Y. Otake, V. Puvanesarajah, A. S. Wang, A. Uneri, T. De Silva, S. Vogt, G. Kleinszig, B. D. Elder, C. R. Goodwin,
T. A. Kosztowski, J. A. Liauw, M. Groves, A. Bydon, D. M. Sciubba, T. F. Witham, J.-P. Wolinsky, N. Aygun, Z. L. Gokaslan,
and J. H. Siewerdsen, “Automatic localization of target vertebrae in spine surgery: clinical evaluation of the LevelCheck
registration algorithm,” Spine (Phila. Pa. 1976), vol. 40, no. 8, pp. E476–E483, 2015.
[9] T. De Silva, S.-F. L. Lo, N. Aygun, D. M. Aghion, A. Boah, R. Petteys, A. Uneri, M. D. Ketcha, T. Yi, S. Vogt, G. Kleinszig,
W. Wei, M. Weiten, X. Ye, A. Bydon, D. M. Sciubba, T. F. Witham, J.-P. Wolinsky, and J. H. Siewerdsen, “Utility of the
LevelCheck Algorithm for Decision Support in Vertebral Localization,” Spine (Phila. Pa. 1976), vol. 41, no. 20, pp. E1249–
E1256, Mar. 2016.
[10] B. Cabral, N. Cam, and J. Foran, “Accelerated volume rendering and tomographic reconstruction using texture mapping
hardware,” Proc. 1994 Symp. Vol. Vis., pp. 91–98, 1994.
[11] S. Ouadah, J. W. Stayman, G. J. Gang, T. Ehtiati, and J. H. Siewerdsen, “Self-calibration of cone-beam CT geometry using
3D–2D image registration,” Phys. Med. Biol., vol. 61, no. 7, p. 2613, 2016.