Lung cancer is the deadliest cancer worldwide. Early detection of lung cancer is a promising way to reduce mortality. Accurate pulmonary nodule detection in computed tomography (CT) images is crucial for early diagnosis of lung cancer. The development of computer-aided detection (CAD) systems for pulmonary nodules makes CT analysis more accurate and more efficient. Recent studies from other groups have focused on lung cancer diagnosis CAD systems that detect medium to large nodules. However, to fully investigate the relationship between nodule features and cancer diagnosis, a CAD system capable of detecting nodules of all sizes is needed. In this paper, we present a deep-learning-based automatic detection system for pulmonary nodules of all sizes, built by cascading two artificial neural networks. We first use a U-Net-like 3D network to generate nodule candidates from CT images. Then, we use another 3D neural network to refine the locations of the nodule candidates generated by the first sub-system. With the second sub-system, we bring the nodule candidates closer to the centers of the ground-truth nodule locations. We evaluate our system on a public CT dataset provided by the Lung Nodule Analysis (LUNA) 2016 grand challenge. The performance on the test set shows that our system achieves 90% sensitivity at an average of 4 false positives per scan. This indicates that our system can serve as an aid for automatic nodule detection, which is beneficial for lung cancer diagnosis.
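As a concrete illustration of the cascade described above, the following minimal PyTorch sketch pairs a small 3D candidate-generation network with a patch-based refinement network that regresses a positional offset. The class names and layer sizes are illustrative assumptions; the actual networks are larger and trained on LUNA16 data.

```python
# Minimal PyTorch sketch of the two-stage cascade: a U-Net-like 3D network
# proposes candidates, and a second 3D network refines each candidate's
# location. Class names and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CandidateNet3D(nn.Module):
    """Stage 1: voxel-wise nodule probability map from a CT volume."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),
            nn.Conv3d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 1),            # one logit per voxel
        )

    def forward(self, ct_volume):            # (B, 1, D, H, W)
        return self.decoder(self.encoder(ct_volume))

class RefineNet3D(nn.Module):
    """Stage 2: regress a (dz, dy, dx) offset for a candidate-centered patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.offset = nn.Linear(32, 3)

    def forward(self, patch):                # (B, 1, d, h, w)
        return self.offset(self.features(patch).flatten(1))

# Usage idea: threshold the stage-1 probability map to obtain candidate
# centers, crop a patch around each center, and shift the center by the
# predicted offset to move it toward the ground-truth nodule location.
```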
Non-interventional diagnostics (CT or MR) enables early identification of diseases like cancer. Lesion growth assessment during follow-up is often used to distinguish benign lesions from malignant ones, so correspondences need to be found between the lesions localized at each time point. Manually matching the radiological findings can be time-consuming and tedious because of possible differences in orientation and position between scans. Moreover, the complicated nature of the disease forces physicians to rely on multiple modalities (PET-CT, PET-MR), where matching is even more challenging. Here, we propose an automatic feature-based matching that is robust to changes in organ volume and to subpar or missing registration, and that requires very little computation. Traditional matching methods rely mostly on accurate image registration, applying the resulting deformation map to the finding coordinates. This is a disadvantage when accurate registration is time-consuming or impossible because of large organ-volume differences between scans. Our method instead casts matching as a supervised classification problem, taking advantage of the CAD features that are already computed for each finding. In addition, the matching can be done extremely fast and with reasonable accuracy even when image registration fails. Experimental results on real-world multi-time-point thoracic CT data showed an accuracy above 90% with negligible false positives across a variety of registration scenarios.
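As a rough sketch of the matching-as-classification idea, each (baseline, follow-up) pair of findings can be turned into a pair descriptor from the existing CAD features and scored by an off-the-shelf classifier. The feature construction and the random-forest choice below are illustrative assumptions, not the method's exact design.

```python
# Sketch: lesion matching as binary classification over finding pairs.
# Assumes each finding is represented by a CAD feature vector whose first
# three entries are (approximate) spatial coordinates -- an assumption made
# only for this illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_descriptor(f_prev, f_curr):
    """Combine the CAD features of two findings into one pair descriptor."""
    coarse_distance = np.linalg.norm(f_prev[:3] - f_curr[:3])
    return np.concatenate([np.abs(f_prev - f_curr), [coarse_distance]])

# Training: X holds pair descriptors of labelled match / no-match examples.
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# Inference: each baseline finding is assigned to the follow-up finding with
# the highest predicted match probability, if it exceeds a threshold.
```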
Recent studies have shown that low-dose computed tomography (LDCT) can be an effective screening tool to reduce lung cancer mortality. Computer-aided detection (CAD) would be a beneficial second reader for radiologists in such cases. Studies demonstrate that while iterative reconstruction (IR) improves LDCT diagnostic quality, it significantly degrades CAD performance (increased false positives) when CAD is applied directly. Solutions such as retraining with newer data or applying a standard preprocessing technique may not suffice for improving CAD performance, given the wide variety of CT scanners and non-uniform acquisition protocols. Here, we present a learning-based framework that can adaptively transform a wide variety of input data to boost the performance of an existing CAD system. This enhances not only its robustness but also its applicability in clinical workflows. Our solution automatically applies a suitable pre-processing filter to the given image based on its characteristics. This requires ground truth (GT) on which filter choice improves CAD performance; accordingly, we propose an efficient consolidation process based on a novel metric. Using key anatomical landmarks, we then derive consistent feature descriptors for a classification scheme that uses a priority mechanism to automatically choose an optimal preprocessing filter. We demonstrate the performance improvement of a CAD prototype using hospital-scale datasets acquired from North America, Europe, and Asia. Although we demonstrate our results for a lung nodule CAD, the scheme extends straightforwardly to other post-processing tools dedicated to other organs and modalities.
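A hypothetical sketch of the filter-selection step follows: simple image-level descriptors feed a classifier, and a priority rule breaks near-ties in favour of the least aggressive filter. The descriptor set, filter names, and classifier choice are assumptions for illustration only.

```python
# Sketch: choose a pre-processing filter from image-level descriptors.
# Descriptors, filter names, and the classifier are illustrative assumptions.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import GradientBoostingClassifier

FILTERS = ["none", "mild_smoothing", "strong_denoising"]   # priority order

def image_descriptors(volume):
    vol = volume.astype(np.float32)
    lap = ndimage.laplace(vol)
    return np.array([vol.mean(), vol.std(),
                     np.abs(lap).mean(),     # edge-energy / sharpness proxy
                     lap.std()])             # noise proxy

def choose_filter(clf, volume, margin=0.1):
    proba = clf.predict_proba(image_descriptors(volume)[None, :])[0]
    scores = dict(zip(clf.classes_, proba))
    best = max(scores.values())
    # Priority mechanism: among filters scoring close to the best, pick the
    # earliest (least aggressive) one in FILTERS.
    for name in FILTERS:
        if scores.get(name, 0.0) >= best - margin:
            return name

# Training (illustrative): clf = GradientBoostingClassifier().fit(X_desc, y_best_filter)
```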
There is an increasing need to provide end-users with seamless and secure access to healthcare information acquired
from a diverse range of sources. This might include local and remote hospital sites equipped with different vendors and
practicing varied acquisition protocols and also heterogeneous external sources such as the Internet cloud. In such
scenarios, image post-processing tools such as CAD (computer-aided diagnosis), which were hitherto developed using a
smaller set of images, may not always work optimally on newer sets of images with entirely different characteristics.
In this paper, we propose a framework that assesses the quality of a given input image and automatically applies an
appropriate pre-processing method in such a manner that the image characteristics are normalized regardless of its
source. We focus mainly on medical images, and the objective of this preprocessing method is to ensure that various
image processing and workflow applications such as CAD perform in a consistent manner. Our system first performs an
assessment step wherein an image is evaluated on criteria such as noise and image sharpness. Depending on the measured
characteristics, we then apply an appropriate normalization technique, which together form our overall pre-processing
framework. A systematic evaluation of the proposed scheme is carried out on a large set of CT images acquired from
various vendors, including images reconstructed with next-generation iterative methods. Results demonstrate that the
images are normalized and thus suitable for an existing LungCAD prototype.
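For illustration, a minimal form of the assess-then-normalize pipeline might look as follows; the two quality measures, the thresholds, and the normalization filters are assumptions, not the evaluated system.

```python
# Sketch: assess an image's noise and sharpness, then normalize accordingly.
# Measures, thresholds, and filters below are illustrative assumptions.
import numpy as np
from scipy import ndimage

def assess(image):
    """Crude quality measures on a CT slice or volume (float array)."""
    img = image.astype(np.float32)
    noise = ndimage.laplace(img).std()                # high for noisy inputs
    sharpness = np.abs(np.gradient(img)[0]).mean()    # low for over-smoothed inputs
    return noise, sharpness

def normalize(image, noise_thr=40.0, blur_thr=5.0):
    noise, sharpness = assess(image)
    if noise > noise_thr:          # e.g. noisy low-dose data: denoise
        return ndimage.median_filter(image, size=3)
    if sharpness < blur_thr:       # e.g. heavily smoothed IR data: sharpen
        return image - 0.5 * ndimage.laplace(image.astype(np.float32))
    return image
```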
Characterization and quantification of the severity of diffuse parenchymal lung diseases (DPLDs) using Computed
Tomography (CT) is an important issue in clinical research. Recently, several classification-based computer-aided
diagnosis (CAD) systems [1-3] for DPLD have been proposed. For some of these systems, a degradation of performance
[2] was reported on unseen data because of considerable inter-patient variance in parenchymal tissue patterns.
We believe that a CAD system of real clinical value should be robust to inter-patient variance and be able to classify
unseen cases online more effectively. In this work, we have developed a novel adaptive knowledge-driven CT image
search engine that combines offline learning aspects of classification-based CAD systems with online learning aspects of
content-based image retrieval (CBIR) systems. Our system can seamlessly and adaptively fuse offline accumulated
knowledge with online feedback, leading to improved online performance in detecting DPLD in both accuracy and
speed. Our contributions are: (1) newly developed 3D texture-based and morphology-based features; (2) a
multi-class offline feature selection method; and, (3) a novel image search engine framework for detecting DPLD. Very
promising results have been obtained on a small test set.
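One way to picture the offline/online fusion is a ranking rule that blends the offline classifier's score for each retrieved patch with its similarity to examples the user has marked as relevant in the current session. The linear blend and its weight below are illustrative assumptions, not the engine's actual scheme.

```python
# Sketch: fuse offline classifier scores with online relevance feedback.
# The fusion rule and weight are assumptions for illustration.
import numpy as np

def rank_candidates(offline_scores, feats, relevant_feats, alpha=0.5):
    """offline_scores: per-candidate probabilities from the offline classifier;
    feats: candidate feature vectors (N x d); relevant_feats: features of the
    patches marked relevant by the user so far (M x d)."""
    if len(relevant_feats) == 0:
        online = np.zeros(len(feats))
    else:
        # Online score: proximity to the nearest user-confirmed example.
        d = np.linalg.norm(feats[:, None, :] - relevant_feats[None, :, :], axis=-1)
        online = 1.0 / (1.0 + d.min(axis=1))
    fused = alpha * np.asarray(offline_scores) + (1.0 - alpha) * online
    return np.argsort(-fused)            # indices of the best candidates first
```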
Segmentation of blood vessels is a challenging problem due to poor contrast, noise, and specifics of vessels'
branching and bending geometry. This paper describes a robust semi-automatic approach to extract the surface
between two or more user-supplied end points for tubular- or vessel-like structures. We first use a minimal path
technique to extract the shortest path between the user-supplied points. This path is the global minimizer of
an active contour model's energy along all possible paths joining the end-points. Subsequently, the surface of
interest is extracted using an edge-based level set segmentation approach. To prevent leakage into adjacent
tissues, the algorithm uses a diameter constraint that does not allow the moving front to grow wider than the
predefined diameter. Points constituting the extracted path(s) are automatically used as initialization seeds for
the evolving level set function. To cope with any further leaks that may occur when the vessel width varies greatly
between the user-supplied end-points, a freezing mechanism is designed to prevent the moving front from leaking into
undesired areas. The regions to be frozen are determined from a few clicks by the user. The
potential of the proposed approach is demonstrated on several synthetic and real images.
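A reduced version of the two-step pipeline (minimal path, then an edge-based active contour seeded from that path) can be sketched with scikit-image primitives. The cost function, seeding radius, and parameters are assumptions, and the diameter constraint and freezing mechanism described above are omitted from this sketch.

```python
# Sketch: shortest path between two user points through a cost image derived
# from the image, then a geodesic active contour initialized from the path.
# Parameters and the cost definition are illustrative assumptions; the
# diameter constraint and freezing mechanism of the paper are not shown.
import numpy as np
from skimage.graph import route_through_array
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def extract_vessel(image, start, end, n_iter=100):
    """image: float array in [0, 1], bright vessels on a dark background."""
    # Minimal path: low cost inside bright tubular structures, so the path
    # follows the vessel between the two user-supplied end points.
    cost = 1.0 / (image + 1e-3)
    path, _ = route_through_array(cost, start, end, fully_connected=True)

    # Use the path points as initialization seeds for the level set.
    init = np.zeros(image.shape, dtype=np.int8)
    for point in path:
        idx = tuple(slice(max(c - 2, 0), c + 3) for c in point)
        init[idx] = 1

    # Edge-based stopping function: the front slows down on strong gradients.
    gimage = inverse_gaussian_gradient(image)
    return morphological_geodesic_active_contour(gimage, n_iter,
                                                 init_level_set=init, balloon=1)
```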
Most methods for classifier design assume that the training samples
are drawn independently and identically from an unknown data-generating
distribution (i.i.d.), although this assumption is violated in several real-life problems. Relaxing this i.i.d. assumption, we
develop training algorithms for the more realistic situation where
batches or sub-groups of training samples may have internal
correlations, although the samples from different batches may be
considered to be uncorrelated; we also consider the extension to
cases with hierarchical--i.e. higher order--correlation structure
between batches of training samples. After describing efficient
algorithms that scale well to large datasets, we provide some
theoretical analysis to establish their validity. Experimental
results from real-life Computer Aided Detection (CAD) problems
indicate that relaxing the i.i.d. assumption leads to statistically
significant improvements in the accuracy of the learned classifier.
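As a simple illustration of why relaxing the i.i.d. assumption matters in practice, one crude surrogate is to down-weight samples from large, internally correlated batches (e.g., all candidates from one patient) so that every batch contributes roughly equally. This is only an illustrative baseline, not the algorithms developed and analysed in the paper.

```python
# Sketch: batch-weighted training as a crude surrogate for handling
# within-batch correlation. Not the paper's algorithm; illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_batch_weighted(X, y, batch_ids):
    """batch_ids[i] identifies the batch (e.g., patient) of sample i."""
    batch_ids = np.asarray(batch_ids)
    _, inverse, counts = np.unique(batch_ids, return_inverse=True,
                                   return_counts=True)
    weights = 1.0 / counts[inverse]          # each batch's weights sum to 1
    clf = LogisticRegression(max_iter=1000)
    return clf.fit(X, y, sample_weight=weights)
```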
Colon cancer is a widespread disease and, according to the American Cancer Society, it is estimated that in 2006
more than 55,000 people will die of colon cancer in the US. However, early detection of colorectal polyps drastically
reduces mortality. Computer-Aided Detection (CAD) of colorectal polyps is a tool that could help physicians find such
lesions in CT scans of the colon.
In this paper, we present the first phase, candidate generation (CG), of our technique for the detection of
colonic polyp candidate locations in CT colonoscopy. Since polyps typically appear as protrusions on the surface
of the colon, our cutting-plane algorithm identifies all those areas that can be "cut-off" using a plane. The key
observation is that for any protruding lesion there is at least one plane that cuts a fragment off. Furthermore,
the intersection between the plane and the polyp will typically be small and circular. On the other hand, a
plane cannot cut a small circular cross-section from a wall or a fold: owing to their concave or elongated paraboloid
morphology, these structures yield cross-sections that are much larger or non-circular.
The algorithm has been incorporated as part of a prototype CAD system. An analysis on a test set of
more than 400 patients yielded high per-patient sensitivities of 95% and 90% in clean and tagged preparation,
respectively, for polyps ranging from 6 mm to 20 mm in size.
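A toy version of the cutting-plane test can be written over a set of surface voxels: a candidate plane qualifies if the fragment it cuts off is small and its cross-section is roughly circular. The thresholds and the roundness measure below are illustrative assumptions, not the prototype's implementation.

```python
# Toy sketch of the cutting-plane test on a patch of colon-surface voxels.
# Thresholds and the roundness measure are illustrative assumptions.
import numpy as np

def cuts_off_protrusion(points, normal, offset,
                        max_fragment=150, min_roundness=0.7):
    """points: (N, 3) surface voxel coordinates; normal: unit plane normal;
    the plane is {x : normal . x = offset}."""
    signed = points @ normal - offset
    fragment = points[signed > 0]                 # voxels "cut off" by the plane
    cross_section = points[np.abs(signed) < 1.0]  # voxels intersecting the plane
    if len(fragment) == 0 or len(cross_section) < 3:
        return False
    # Project the cross-section onto the plane and compare its two in-plane
    # extents: near-circular cuts have comparable spread in both directions,
    # while walls and folds give elongated or very large cross-sections.
    centered = cross_section - cross_section.mean(axis=0)
    in_plane = centered - np.outer(centered @ normal, normal)
    s = np.linalg.svd(in_plane, compute_uv=False)
    roundness = s[1] / (s[0] + 1e-6)
    return len(fragment) < max_fragment and roundness > min_roundness
```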
The Computed Tomography (CT) modality shows not only the body of the patient in the volumes it generates, but also the clothing, the cushion, and the table. This can be a problem for two applications in particular. The first is 3D visualization, where the table has high-density parts that might hide regions of interest. The second is registration of acquisitions obtained at different time points: the table and cushions might be visible in only one data set, and their positions and shapes may vary, making the registration less accurate. An automatic approach for extracting the body would solve these problems; it should be robust, reliable, and fast. We therefore propose a multi-scale method based on deformable models. The idea is to move a surface across the image that attaches to the boundaries of the body. We iteratively compute forces that take into account local information around the surface. These forces make the surface move through the table but ensure that it stops when it comes close to the body. Our model has elastic properties; moreover, we account for the fact that some regions in the volume convey more information than others by giving them more weight. This is done by using normalized convolution when regularizing the surface. The algorithm, tested on a database of over a hundred volumes of the whole body, chest, or lower abdomen, has proven to be very efficient, even for volumes with up to 900 slices, providing accurate results in an average time of 6 seconds. It is also robust against noise and against variations in scale and table shape.
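Normalized convolution, used here to regularize the surface while favouring informative regions, can be summarised in a few lines: the smoothed value is the certainty-weighted average under the smoothing kernel. The 1D formulation below is a simplified illustration, not the paper's exact surface regularizer.

```python
# Sketch of normalized convolution: smooth a signal while weighting each
# sample by its certainty, so reliable regions dominate the result.
# (Simplified 1D illustration; the paper applies this idea on the surface.)
import numpy as np

def normalized_convolution(signal, certainty, kernel):
    num = np.convolve(signal * certainty, kernel, mode="same")
    den = np.convolve(certainty, kernel, mode="same")
    return num / np.maximum(den, 1e-8)

# Example smoothing kernel; samples with low certainty are effectively
# filled in from their more reliable neighbours.
kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
kernel /= kernel.sum()
```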
A novel method called local shape controlled voting has been developed for spherical object detection in 3D voxel
images. By combining local shape properties with the global tracking procedure of normal overlap, the proposed
method resolves the ambiguity of normal overlap between a small sphere and a possibly large cylinder: the
normal-overlap technique only measures the 'density' of overlapping normals, without capturing how the normal
vectors are distributed in 3D. The proposed method was applied to computer-aided detection of small pulmonary
nodules in helical CT images. Experiments showed that this method attains better performance than the original
normal-overlap technique.
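The distinction the method exploits can be sketched numerically: where many surface normals overlap, their directions are spread nearly isotropically for a sphere but confined to a plane for a cylinder, so a density score modulated by the spread of normal directions separates the two. The scoring function below is a hypothetical illustration, not the published voting procedure.

```python
# Sketch: score a voxel from the unit normals (N x 3) that pass near it.
# Hypothetical illustration of combining overlap density with local shape.
import numpy as np

def sphere_vote(normals):
    density = len(normals)                    # classic normal-overlap count
    if density < 3:
        return 0.0
    # Eigenvalues of the orientation matrix describe how the normals spread
    # in 3D: all three comparable for a sphere, one near zero for a cylinder.
    scatter = normals.T @ normals / density
    eigvals = np.sort(np.linalg.eigvalsh(scatter))
    isotropy = eigvals[0] / (eigvals[2] + 1e-9)
    return density * isotropy                 # high only for sphere-like voxels
```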