
Universal Lesion Detection and Classification Using Limited Data and Weakly-Supervised Self-training

  • Conference paper

Medical Image Learning with Limited and Noisy Data (MILLanD 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13559)

Abstract

Radiologists routinely identify, measure, and classify clinically significant lesions for cancer staging and tumor burden assessment. Because these tasks are repetitive and cumbersome, often only the largest lesion is identified, leaving others of potential importance unmentioned. Automated deep learning-based methods for lesion detection have been proposed in the literature to help relieve this burden, built on the publicly available DeepLesion dataset (32,735 lesions, 32,120 CT slices, 10,594 studies, 4,427 patients, 8 body part labels). However, this dataset contains missing lesions and displays a severe class imbalance in the labels. In our work, we use a subset of the DeepLesion dataset (boxes + tags) to train a state-of-the-art VFNet model to detect and classify suspicious lesions in CT volumes. Next, we predict on a larger data subset (containing only bounding boxes) and identify new lesion candidates for a weakly-supervised self-training scheme. The self-training is performed across multiple rounds to improve the model's robustness against noise. Two experiments were conducted with static and variable thresholds during self-training, and we show that sensitivity improves from 72.5% without self-training to 76.4% with self-training. We also provide a structured reporting guideline through a "Lesions" sub-section for entry into the "Findings" section of a radiology report. To our knowledge, we are the first to propose a weakly-supervised self-training approach for joint lesion detection and tagging in order to mine for under-represented lesion classes in the DeepLesion dataset.
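The self-training scheme described above can be sketched in simplified form: a trained detector predicts on weakly-labeled volumes, and only predictions above a confidence threshold are kept as pseudo-labels for the next round, with either a static threshold or a variable one that relaxes per round. This is a minimal illustrative sketch, not the authors' implementation; the `select_pseudo_labels` and `self_train` functions, the threshold values, and the decay schedule are all hypothetical.

```python
# Minimal sketch of a weakly-supervised self-training loop with
# static vs. variable (per-round decaying) confidence thresholds.
# All function names and thresholds here are illustrative assumptions.

def select_pseudo_labels(predictions, threshold):
    """Keep only detections whose confidence score meets the threshold."""
    return [p for p in predictions if p["score"] >= threshold]

def self_train(predictions_per_round, static_threshold=None, start=0.9, step=0.05):
    """Mine pseudo-labels over several rounds.

    If static_threshold is given, apply it every round; otherwise relax
    the threshold each round (variable scheme) to admit harder,
    lower-confidence lesion candidates, floored at 0.5.
    """
    mined = []
    for round_idx, preds in enumerate(predictions_per_round):
        if static_threshold is not None:
            threshold = static_threshold
        else:
            threshold = max(start - step * round_idx, 0.5)
        mined.append(select_pseudo_labels(preds, threshold))
    return mined

# Toy example: detections from two self-training rounds.
rounds = [
    [{"box": (10, 10, 50, 50), "score": 0.95},
     {"box": (5, 5, 20, 20), "score": 0.70}],
    [{"box": (30, 30, 80, 80), "score": 0.88},
     {"box": (1, 1, 9, 9), "score": 0.60}],
]
static = self_train(rounds, static_threshold=0.9)    # round 2 rejects the 0.88 box
variable = self_train(rounds, start=0.9, step=0.05)  # round 2 threshold drops to 0.85
```

The variable scheme trades label purity for recall: later rounds accept lower-confidence candidates, which is how under-represented lesion classes can enter the training pool.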



Acknowledgements

This work was supported by the Intramural Research Program of the National Institutes of Health (NIH) Clinical Center.

Author information

Correspondence to Tejas Sudharshan Mathai.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 551 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Naga, V., Mathai, T.S., Paul, A., Summers, R.M. (2022). Universal Lesion Detection and Classification Using Limited Data and Weakly-Supervised Self-training. In: Zamzmi, G., Antani, S., Bagci, U., Linguraru, M.G., Rajaraman, S., Xue, Z. (eds) Medical Image Learning with Limited and Noisy Data. MILLanD 2022. Lecture Notes in Computer Science, vol 13559. Springer, Cham. https://doi.org/10.1007/978-3-031-16760-7_6

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-16760-7_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16759-1

  • Online ISBN: 978-3-031-16760-7

  • eBook Packages: Computer Science, Computer Science (R0)
