Abstract
Assessing the effectiveness of scoliosis surgery requires the quantification of 3D spinal deformities from pre- and post-operative radiographs. This can be achieved from 3D reconstructed models of the spine, but a fast, automatic method to recover such models from pre- and post-operative radiographs remains a challenge: the visibility of the vertebrae varies considerably, and large metallic objects occlude important landmarks in postoperative radiographs. This paper presents a method for automatic 3D spine reconstruction from pre- and post-operative calibrated biplanar radiographs. We fitted a statistical shape model of the spine to the images using a 3D/2D registration based on convolutional neural networks. The metallic structures in postoperative radiographs were detected and removed using an image in-painting method to improve the performance of vertebrae registration. We applied the method to a set of 38 operated patients and computed clinical parameters (such as the Cobb and kyphosis/lordosis angles, and vertebral axial rotations) from the pre- and post-operative 3D reconstructions. Compared to manual annotations, the proposed automatic method provided values with mean absolute errors <5.6° and <6.8° for clinical angles, <1.5 mm and <2.3 mm for vertebra locations, and <4.5° and <3.7° for vertebra orientations, respectively for pre- and post-operative times. The fast, automatic 3D reconstruction from pre-operative and in-painted postoperative images provided a relevant set of parameters to assess the spine surgery without any human intervention.
This work was supported by NSERC, Canada Research Chairs, MEDTEQ − MITACS program and EOS Imaging company.
1 Introduction
Three-dimensional (3D) quantitative evaluation of spine deformities is fundamental for the assessment of scoliosis surgery. Clinical parameters can be computed from 3D reconstructions obtained from biplanar radiographs, helping surgeons with patients’ follow-up and with assessing the clinical outcome of surgery [1]. Many difficulties arise from the scoliotic deformation present in pre-operative radiographs: the vertebrae overlap with soft tissues in the X-ray projection, and the vertebral axial rotation changes the appearance of the vertebrae in the radiographs. For postoperative radiographs, the presence of metallic hooks, rods and screws masking the spinal structures is challenging for reconstruction methods. Accurate clinical parameter extraction has been addressed with semi-automatic methods [2, 3]; however, these require non-negligible user supervision and training. For postoperative radiographs, the mean 3D reconstruction time exceeded 12 min [3], limiting their use in clinical workflows. Moreover, the occlusion caused by spine instrumentation degrades the reproducibility of the results [3]. A fast semi-automatic method [4] was proposed to decrease user interaction time by using a statistical model of a landmark-based vertebra representation. This representation, however, was too simple to determine the Cobb and kyphosis/lordosis clinical angles, since it did not describe the 3D orientations of the vertebral endplates.
An automatic method providing an estimate of a detailed 3D reconstruction from both pre- and post-operative biplanar radiographs would be highly beneficial. For the pre-operative case, a method for spine detection in 2D images using deep neural networks [5] was proposed. This method, however, does not recover 3D clinical parameters and is not adapted to postoperative radiographs. The method proposed in [6] detects the spine silhouette in both images to avoid the occlusion problem caused by the instrumentation. It relies on a statistical model to recover the vertical position of the vertebrae, making the estimation of the vertebral endplates’ positions and orientations dependent on the statistical model rather than on the image information.
In this paper, we propose a method for automatic 3D spine reconstruction combining two statistical priors for 3D/2D registration: (1) 3D global spine and vertebra shape priors using a PCA-based statistical shape model (SSM), and (2) vertebra appearance priors using convolutional neural networks (CNN) that consider the biplanar image information simultaneously. To prevent the registration process from diverging on postoperative radiographs, the metallic structures were detected and removed with an image in-painting method. Clinical parameters could then be automatically determined at pre- and post-surgery times. The method’s accuracy was assessed for vertebra poses and clinical parameters against expert-generated 3D reconstructions.
2 Automatic 3D Spine Reconstruction
A 3D/2D registration of vertebrae, based on CNN, is regularized via an SSM of the spine (Fig. 1.D). The SSM is constructed from a set of parameters representing the shape and pose of vertebrae. This process is applied to pre-operative radiographs (Fig. 1.A). To apply it to postoperative radiographs, the instrumentation is first detected and removed by an in-painting algorithm (Fig. 1.B and C).
2.1 Statistical Spine Model
An enriched geometrical representation composed of ellipses, lines, spheres and points described the shapes of the vertebrae and the pelvis. Pedicles and endplates were modeled by 3D ellipses (Fig. 1.D). As previously proposed in shape + pose models [7], these primitives’ positions were embedded in a local reference frame attached to each vertebra. The vertebra pose was composed of a translation vector from a global origin located midway between the two femoral heads, and three orientation parameters. The SSM was trained on a set of 470 subjects, with 1016 parameters describing the spine model (from the seventh cervical vertebra to the sacral endplate). Principal component analysis (PCA) provided the following linear generative model: \( s = \bar{s} + Bm \), where \( m \) is a vector of 236 parameters in latent space (describing 99% of the data variance), \( B \) is the PCA basis, \( \bar{s} \) is the data mean and \( s \) is the resulting vector describing the current spine model. We constructed a regularized linear system to solve for the optimal model parameters given target vertebral body center (VBC) locations \( t \):

\( \hat{m} = \arg\min_{m} \left\| B_{Q}m - (t - \bar{s}_{Q}) \right\|^{2} + \lambda \left\| m \right\|^{2} \)   (1)

where \( B_{Q} \) denotes the sub-matrix of \( B \) restricted to the subset {Q} of location coordinates imposed by the CNN detections (cf. Sect. 2.2). The parameter \( \lambda \) adjusts the tradeoff between target fitting and model smoothness, and was determined experimentally.
The resulting spine model \( \hat{s} = \bar{s} + B\hat{m} \) provided the geometric primitives’ parameters, which were used as deformation handles for an as-rigid-as-possible 3D deformation [8] of a generic mesh model, yielding a detailed, morpho-realistic 3D visualization of the vertebral surfaces.
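The regularized fit of the SSM to the detected VBC targets can be sketched in a few lines. The following assumes a Tikhonov-style least-squares reading of Eq. (1) solved via the normal equations; the array shapes and the function name are illustrative, not the paper’s exact solver:

```python
import numpy as np

def fit_ssm(B, s_mean, Q, t, lam=1.0):
    """Fit PCA coefficients m so that the model coordinates indexed by Q match targets t.

    Solves  min_m ||B_Q m - (t - s_mean_Q)||^2 + lam * ||m||^2
    via the regularized normal equations, then returns the full spine vector.
    """
    B_Q = B[Q, :]                                   # rows of the PCA basis for constrained coords
    r = t - s_mean[Q]                               # targets in centered coordinates
    A = B_Q.T @ B_Q + lam * np.eye(B.shape[1])      # regularized normal matrix
    m_hat = np.linalg.solve(A, B_Q.T @ r)           # optimal latent parameters
    return s_mean + B @ m_hat                       # reconstructed spine parameter vector
```

With a small regularization weight, the fit interpolates the constrained coordinates exactly while the unconstrained ones are inferred from the statistical model.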
2.2 Vertebrae 3D/2D Registration Based on Convolutional Neural Networks
Three CNNs, dedicated to the high thoracic (C7–T5), low thoracic (T6–T12), and lumbar (L1–L5) spine regions, were trained to predict the vertebral body center (VBC) image locations in a patch by regressing its relative displacement vector from the patch intensities [5, 9]. We compared the coefficient of determination \( r^{2} \) of three regression models: the CNN (\( r^{2} \) = 0.9) provided the best results compared to multi-linear regression (\( r^{2} \) = 0.5) and a DNN (\( r^{2} \) = 0.8). To generate the training dataset, patches were sampled randomly around the actual VBC. Due to the characteristics of the imaging device used [1], the vertical coordinates of the two views correspond, so the vertical patch displacement sampling was the same for both views. To consider the lateral and frontal radiograph information simultaneously, we used a multi-channel CNN input to learn a joint model (Fig. 2.A). Using both views improved the mean proximo-distal error of the VBC location prediction to 1.4 ± 2.6 mm, versus 2 ± 3 mm when using only the frontal view. With a patch size of 91 × 91 pixels for each view, the input was thus a 3D tensor of size 2 × 91 × 91. The proposed CNN architecture contained three convolutional layers (with a kernel size of 5 × 5) followed by a dense layer of 500 units with rectified linear activation functions (Fig. 2). Once trained, the CNNs predicted the 2D displacements of the vertebral centers in each view. Using the epipolar geometry, the 3D stereo-corresponding point was computed as the intersection of the two lines of projection (Fig. 2.B) determined from the image calibrations. This set of 3D positions \( t \) was then used to fit the SSM to the patient data by inferring the shape and pose of each vertebra using Eq. (1).
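As an illustration of the patch-regression idea, the following numpy sketch implements a forward pass with the stated sizes (a 2 × 91 × 91 input, three 5 × 5 convolutions, and a 500-unit ReLU dense layer). The channel counts, the 2 × 2 max pooling between layers, and the 4-value output (one 2D displacement per view) are assumptions, not the paper’s exact architecture:

```python
import numpy as np

def conv2d_valid(x, w):
    """'Valid' 2D convolution; x: (C_in, H, W), w: (C_out, C_in, k, k)."""
    C_out, C_in, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((C_out, H, W))
    for o in range(C_out):
        for c in range(C_in):
            for i in range(k):
                for j in range(k):
                    out[o] += w[o, c, i, j] * x[c, i:i + H, j:j + W]
    return out

def maxpool2(x):
    """2x2 max pooling (an assumption; the paper only states conv/dense sizes)."""
    C, H, W = x.shape
    H2, W2 = H // 2, W // 2
    return x[:, :H2 * 2, :W2 * 2].reshape(C, H2, 2, W2, 2).max(axis=(2, 4))

def init_params(rng, channels=(2, 8, 16, 16), hidden=500, out=4):
    # Spatial sizes: 91 -> conv 87 -> pool 43 -> conv 39 -> pool 19 -> conv 15 -> pool 7
    convs = [0.01 * rng.standard_normal((channels[i + 1], channels[i], 5, 5))
             for i in range(3)]
    flat = channels[-1] * 7 * 7
    return {"convs": convs,
            "W1": 0.01 * rng.standard_normal((hidden, flat)), "b1": np.zeros(hidden),
            "W2": 0.01 * rng.standard_normal((out, hidden)), "b2": np.zeros(out)}

def predict_displacement(patch, params):
    """patch: (2, 91, 91) tensor (frontal + lateral channels);
    returns the 2D displacement of the VBC in each view (4 values)."""
    h = patch
    for w in params["convs"]:
        h = maxpool2(np.maximum(conv2d_valid(h, w), 0.0))   # conv + ReLU + pool
    h = np.maximum(params["W1"] @ h.ravel() + params["b1"], 0.0)
    return params["W2"] @ h + params["b2"]
```

In practice the weights would be learned by regressing sampled patch displacements, and the prediction would be iterated until the displacement vanishes, as described in the registration strategy below.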
The 3D/2D registration strategy was as follows. We initially placed the mean SSM spine model by detecting the origin of the pelvis, the sacral endplate and the femoral heads. These landmarks were detected with a multiresolution approach initialized from a coarse detection of the pelvis origin [5]. Then, starting from the positions given by the mean spine model, an inner loop of CNN predictions iteratively moved the VBC points toward their optimal positions, until the predicted displacement tended to zero. The main outer registration loop began by registering the lumbar vertebrae (regions with fewer overlapping soft-tissue organs, where the image information is most reliable). The proximal thoracic vertebrae came next; finally, all vertebral levels were fixed in the statistical spine model in order to progressively converge to the optimal 3D reconstruction.
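The stereo-correspondence step of Sect. 2.2 (intersecting the two lines of projection) can be sketched with the generic midpoint construction for two 3D lines; each line is defined by the X-ray source position and the direction through the detected 2D landmark, both obtained from the image calibrations (the function below is a standard geometric routine, not the paper’s code):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the common perpendicular between two projection lines.

    Each line is given by a point p (the X-ray source) and a direction d
    through the detected 2D landmark. For calibrated biplanar views the
    lines nearly intersect; the midpoint of the shortest segment joining
    them is taken as the 3D stereo-corresponding point.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b            # zero only for parallel lines
    s = (b * e - c * d) / denom      # parameter on line 1
    t = (a * e - b * d) / denom      # parameter on line 2
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))
```

For exactly intersecting lines the midpoint coincides with the intersection; for slightly skew lines (calibration noise) it degrades gracefully.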
2.3 Postoperative Image In-Painting
The postoperative image in-painting was composed of two steps: automatic segmentation of the instrumentation and its replacement with pertinent information. The metallic instrumentation, composed of two rods and screws, appeared as large connected regions in the radiographs. Noting that the instrumentation was locally brighter than its surroundings, a gradient-based edge segmentation algorithm was adopted. First, a median filter was applied to reduce the image noise while preserving the edges. A Sobel filter was then applied (Fig. 3.b), selecting the edges of the instrumentation where the gradient amplitude exceeded a threshold of 10%. A morphological opening removed the potential outliers. A contour-filling process created a mask M1 (Fig. 3.c) containing the spinal instrumentation; however, some elements such as bones were still present. To remove the outliers inside the contours, we computed a mask M2, representing the background, by thresholding all the pixels below 40% of the intensity range (Fig. 3.d). The final mask was computed with the logical expression M = M1 ∧ ¬M2 (Fig. 3.e).
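The two-mask segmentation can be sketched with standard image-processing primitives; the median filter size and the structuring element below are assumptions, while the 10% gradient and 40% intensity thresholds come from the text:

```python
import numpy as np
from scipy import ndimage as ndi

def instrumentation_mask(img):
    """Segment bright metallic instrumentation (sketch of the two-mask scheme).

    img: 2D float array. Returns a boolean mask M = M1 AND (NOT M2).
    """
    smoothed = ndi.median_filter(img, size=5)          # denoise while preserving edges
    gx = ndi.sobel(smoothed, axis=0)
    gy = ndi.sobel(smoothed, axis=1)
    grad = np.hypot(gx, gy)
    edges = grad > 0.10 * grad.max()                   # keep strong gradient edges
    # small structuring element (an assumption) to remove isolated outliers
    edges = ndi.binary_opening(edges, structure=np.ones((2, 2), bool))
    m1 = ndi.binary_fill_holes(edges)                  # contour filling -> mask M1
    lo, hi = img.min(), img.max()
    m2 = img < lo + 0.40 * (hi - lo)                   # dark background -> mask M2
    return m1 & ~m2                                    # M = M1 AND (NOT M2)
```

On a radiograph, M1 captures the filled outline of the bright implants and M2 suppresses anything belonging to the dark background that leaked into the contours.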
Image in-painting consists of filling in missing information in images [10]. We used a diffusion-based in-painting algorithm, as suggested by a study of different in-painting methods in the context of postoperative radiographs [11]. The algorithm filled the missing area with a diffusion of pixel values that is consistent with the surrounding area (Fig. 3.g); its main advantage is its low computational complexity. It used the heat diffusion equation to propagate the pixel intensities of the surrounding area into the mask.
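A minimal version of such diffusion-based in-painting iterates the discrete heat (Laplace) equation over the masked pixels, keeping the unmasked pixels as fixed boundary conditions; the iteration count below is an arbitrary choice:

```python
import numpy as np

def diffusion_inpaint(img, mask, n_iter=500):
    """Fill masked pixels by iterating the discrete heat equation.

    Pixels outside the mask act as fixed boundary conditions; the masked
    pixels converge to a smooth (harmonic) interpolation of their
    surroundings.
    """
    out = img.copy()
    out[mask] = out[~mask].mean()                      # neutral initialization
    for _ in range(n_iter):
        # 4-neighbour average: one Jacobi step of the Laplace equation
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                      + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = avg[mask]                          # update masked pixels only
    return out
```

Because only the masked pixels are updated, the surrounding anatomy is untouched and the filled region converges to a homogeneous gray-level distribution consistent with its border.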
3 Experiments
The experiments used a set of 38 operated scoliotic patients (mean age 17 ± 3 years, mean pre-operative Cobb angle 45°) for whom pairs of pre- and post-operative biplanar X-rays were collected retrospectively with ethics committee approval. Each dataset came with a 3D spine reconstruction and associated clinical parameters. The 3D reconstructions, serving as ground truth to evaluate the proposed automatic method, were performed by an expert using a semi-automatic method [2]. The proposed method was implemented in in-house C++/OpenGL software, using GLSL compute shaders for fast CNN prediction. The average time for spine reconstruction was 12 s. The postoperative radiograph in-painting took 40 s of image processing with a non-optimized Matlab® implementation.
3.1 Vertebrae Location and Orientation
The first evaluation quantified the 3D accuracy of the spine reconstruction on pre- and post-operative X-ray images. A reference frame Rv attached to each vertebra (Fig. 4) was used to compare the vertebra poses between the proposed automatic method and the expert’s reconstructions. Table 1 presents the absolute mean (SD) errors of the vertebrae XYZ locations and lateral, sagittal and axial orientations for pre- and post-operative radiographs. Since the metallic rods and screws occlude the spinal structures and disturb the 3D/2D registration for postoperative radiographs, we calculated the errors with and without the in-painting step to quantify the gain in pose-recovery precision brought by in-painting. The reconstruction method used by the expert has a confidence interval (CI) for vertebral pose accuracy that was assessed by an inter-operator reproducibility study on pre-operative radiographs [2]. For the pre-operative automatic reconstruction, the mean errors were similar to the CI ranges. Without any occlusion from instrumentation, the translation errors were below 1.5 mm. Nevertheless, the scoliotic deformation induces a vertebral axial rotation, and the axial orientation error was the largest (4.5°). In terms of mean residual 3D distance on postoperative images, the semi-automatic method of [4] attains 2.19 mm; for the instrumented thoracic-lumbar levels, the automatic method of [6] attains ~3 mm and the proposed method attains 3.6 mm. Figure 4 details the location errors by vertebral level on postoperative radiographs with and without the in-painting step. The metallic structures can be viewed as artifacts inducing outlier image features: in our case, areas with metallic objects gave poor CNN predictions for the landmark locations. When the radiographs were in-painted, the high intensities were replaced by a homogeneous gray-level distribution.
The automatic location of the vertebrae became clearly more accurate for the vertebral levels that are generally instrumented (T4–L2), as presented in Fig. 4. Thus, the in-painting step enhanced the CNN-based landmark detection and improved the convergence of the 3D/2D registration. The thoracic region remained difficult, however, due to the higher density of overlapping soft tissues.
3.2 Clinical Parameters
The second evaluation computed clinical parameters to quantify the surgical correction. The pre- and post-operative parameters, and the differences (post − pre) representing the surgical correction, were calculated from the reference reconstructions and from the proposed automatic reconstructions. The absolute mean errors (SD) of the automatic reconstructions versus the reference reconstructions were also calculated to assess the accuracy of the method. Table 2 reports the values of the kyphosis (K) T1/T2, the lordosis (L) L1/L5, the height between C7 and S1 (H C7–S1), the main Cobb angle, and the apical vertebral axial rotation (AVR). For example, the mean Cobb angle was −45° before surgery and decreased to −15.8° after surgery. The mean Cobb-angle surgical correction (post − pre difference) was 28° from the reference reconstructions and 27.3° from the automatic reconstructions, showing similar mean differences. Similarly, the mean surgical correction for patient growth (height C7–S1) was 29.1 mm and 29.8 mm from the manual and automatic reconstructions, respectively. The semi-automatic method used to generate the reference has CI values for the clinical parameters at both pre- [2] and post-operative [3] times (Table 2). This method requires time-consuming user interventions (10–20 min depending on the scoliosis severity) when all vertebral shapes are fully adjusted. In our case, the automatic clinical measurements from pre-operative radiographs showed mean absolute errors below 5.6° for all angular parameters.
The 3D/2D registration of the spine model was based on the vertebra center displacements; the other anatomical characteristics of the vertebrae were thus estimated statistically, but still provided a relevant set of clinical parameters. To our knowledge, this is the first time that a solution for automatic 3D vertebra reconstruction from biplanar X-rays has provided the vertebral endplate orientations needed to compute the fundamental Cobb and kyphosis/lordosis angles. The method also provided the 3D vertebral axial rotation with a limited mean error of 4.6° and 3.8° for pre- and post-operative exams, respectively. For surgeons, this parameter serves to quantify the restoration of the apical vertebra (the most rotated) toward neutral rotation between the pre- and post-operative exams.
Since the proposed spine model incorporates a detailed 3D surface, we plan to add an automatic local deformation for fine-tuning the vertebrae, which should improve the accuracy of the angular parameters; thanks to the image in-painting, this will also be possible for postoperative radiographs.
4 Conclusions
This paper presented a method for automatic 3D spine reconstruction combining a statistical shape model of the spine and a set of CNNs for 3D/2D registration. A diffusion-based in-painting method served to remove the surgical instrumentation in order to feed the CNNs with visible information containing fewer perturbations. The proposed approach provided a fast, automatic 3D quantification of the fundamental parameters needed for scoliosis surgery assessment, and should considerably speed up the analysis of spine radiographs at pre- and post-operative times.
References
Ilharreborde, B., et al.: Use of EOS imaging for the assessment of scoliosis deformities: application to postoperative 3D quantitative analysis of the trunk. Eur. Spine J. 23(Suppl. 4), 97–405 (2014)
Humbert, L., et al.: 3D reconstruction of the spine from biplanar X-rays using parametric models based on transversal and longitudinal inferences. Med. Eng. Phys. 31, 681–687 (2009)
Ilharreborde, B., et al.: Angle measurement reproducibility using EOS three-dimensional reconstructions in adolescent idiopathic scoliosis treated by posterior instrumentation. Spine 36(20), E1306–E1313 (2011)
Lecron, F., et al.: Fast 3D spine reconstruction of postoperative patients using a multilevel statistical model. MICCAI 15, 446–453 (2012)
Aubert, B., et al.: Automatic spine and pelvis detection in frontal X-rays using deep neural networks for patch displacement learning. In: IEEE ISBI, June 2016, pp. 1426–1429 (2016)
Kadoury, S., et al.: Postoperative 3D spine reconstruction by navigating partitioning manifolds. Med. Phys. 43(3), 1045–1056 (2016)
Korez, R., et al.: Sparse and multi-object pose + shape modeling of the three-dimensional scoliotic spine. In: 2016 IEEE 13th ISBI, pp. 225–228 (2016)
Cresson, T., et al.: Surface reconstruction from planar X-ray images using moving least squares. In: Conference Proceedings IEEE EMBS, pp. 3967–3970 (2008)
Chen, C., Xie, W., et al.: Fully automatic X-ray image segmentation via joint estimation of image displacements. MICCAI 16, 227–234 (2013)
Guillemot, C., Le Meur, O.: Image inpainting [overview and advances]. IEEE Signal Process. Mag. 31, 127–144 (2014)
Vidal, P.A.: Image processing tools development to help the 3D reconstruction of the spine from postoperative radiographic images. Master thesis, Montréal, École de technologie supérieure (2017)
© 2017 Springer International Publishing AG
Aubert, B., Vidal, P.A., Parent, S., Cresson, T., Vazquez, C., De Guise, J. (2017). Convolutional Neural Network and In-Painting Techniques for the Automatic Assessment of Scoliotic Spine Surgery from Biplanar Radiographs. In: Descoteaux, M., Maier-Hein, L., Franz, A., Jannin, P., Collins, D., Duchesne, S. (eds) Medical Image Computing and Computer-Assisted Intervention − MICCAI 2017. MICCAI 2017. Lecture Notes in Computer Science(), vol 10434. Springer, Cham. https://doi.org/10.1007/978-3-319-66185-8_78