DOI: 10.1145/3594315.3594659

Deep Active Learning for Computer-Aided Detection of Nasopharyngeal Carcinoma in MRI Images

Published: 02 August 2023

Abstract

Early detection and treatment of nasopharyngeal carcinoma have an important impact on improving patient survival rates. Computer-aided detection based on deep learning can automatically detect the presence of nasopharyngeal carcinoma in patients' magnetic resonance imaging (MRI) scans, assisting in the assessment of tumor progression. However, large-scale annotation of MRI images is not feasible because it is time-consuming and burdens the healthcare system. This paper proposes a weakly supervised nasopharyngeal carcinoma detection method for MRI images that achieves good detection performance with only a small amount of labeled data. We first generate a pseudo-color version of each MRI image with a multi-window sampling method, which preserves richer intensity information and makes better use of the image content. We then combine active learning and deep learning to build an active detection model for nasopharyngeal carcinoma: the most representative images are selected from a large unlabeled pool using instance-level image uncertainty and sent to experts for annotation, which significantly reduces the amount of annotation the deep network requires. The proposed method is validated on MRI images from 800 patients with nasopharyngeal carcinoma. The experimental results show that the multi-window resampling method improves the performance of a classical deep detection model by 1.5%, while the active detection model reaches 92.6% of the performance of a deep learning detector trained on all samples while using only 20% of the labels, so good performance is obtained even when the labeled set is small. Our active detection method can locate nasopharyngeal carcinoma lesions with high accuracy without large-scale labeled data, significantly reducing the labeling burden on doctors.
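The abstract describes two components: a multi-window resampling step that turns a single-channel MRI slice into a pseudo-color image, and an active-learning loop that ranks unlabeled images by instance-level uncertainty so experts annotate only the most informative ones. The sketch below illustrates how such a pipeline could look; the window settings, the uncertainty formula, and all function names are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (not the authors' code): pseudo-color MRI via multi-window
# resampling, plus an instance-level uncertainty score for active selection.
# Window settings, the score formula, and the selection budget are
# illustrative assumptions, not values reported in the paper.
import numpy as np

def window_slice(img, center, width):
    """Clip a slice to one intensity window and rescale it to [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def pseudo_color(img, windows=((300, 600), (500, 1000), (700, 1400))):
    """Stack three window settings as RGB channels so one image keeps
    intensity detail that a single window would discard."""
    channels = [window_slice(img, c, w) for c, w in windows]
    return np.stack(channels, axis=-1)  # H x W x 3

def instance_uncertainty(box_scores):
    """Uncertainty of one detected instance: highest for scores near 0.5."""
    p = np.asarray(box_scores, dtype=float)
    return float(np.mean(1.0 - np.abs(2.0 * p - 1.0)))

def select_for_annotation(detections_per_image, budget):
    """Rank unlabeled images by their most uncertain instance and return
    the indices of the top `budget` images for expert annotation."""
    scores = [max((instance_uncertainty(d) for d in dets), default=0.0)
              for dets in detections_per_image]
    return list(np.argsort(scores)[::-1][:budget])
```

An acquisition loop built on these pieces would train the detector on the current labeled set, score the remaining unlabeled pool, send the images chosen by select_for_annotation to experts, and repeat until the labeling budget (20% of the pool in the paper's experiments) is exhausted.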



Published In

ICCAI '23: Proceedings of the 2023 9th International Conference on Computing and Artificial Intelligence
March 2023, 824 pages
ISBN: 9781450399029
DOI: 10.1145/3594315

Publisher

Association for Computing Machinery, New York, NY, United States
