Automatic Detection of Colorectal Polyps Using Transfer Learning
Figure 1. GoogleNet network architecture [30].
Figure 2. Example extracted from “CVC-ClinicDB” [38].
Figure 3. Expansion rate impact on convolutional filters [41].
Figure 4. ResNet18 model architecture [39].
Figure 5. Pixel frequency corresponding to each class.
Figure 6. (a) Before and (b) after augmentation.
Figure 7. The use-case diagram.
Figure 8. The main menu of the application.
Figure 9. Image loading.
Figure 10. Data preview.
Figure 11. Polyp detection.
Figure 12. Online colonoscopy.
Figure 13. Polyp detected during colonoscopy.
Figure 14. Comparison between different masks of the used dataset.
Abstract
1. Introduction
2. Related Works
3. Materials and Methods
- Anatomical landmarks: areas of the gastrointestinal tract that are easily visible through the endoscope. They indicate how far the video camera has advanced during the colonoscopy process.
- Z-line: marks the transition between the oesophagus and the stomach. It is of interest because this is the area where signs of gastroesophageal reflux may appear, and it therefore indicates whether or not the disease is present [30].
- Pylorus: the area at the opening from the stomach into the duodenum. It contains the muscles involved in moving food out of the stomach. Identification of the pylorus is necessary for the detection of ulcers, erosions and stenosis [30].
- Cecum: the most proximal part of the large intestine. Successfully completing the colonoscopy up to this point can be interpreted as an indicator of examination quality [30].
- Pathological findings: in the context of endoscopy, pathological findings are abnormal entities occurring in the gastrointestinal tract. These may be signs of an ongoing disease or of one that is about to begin. Their detection is extremely important, especially for initiating suitable treatment [30].
- Esophagitis: an inflammation of the oesophagus, visible as a break of the oesophageal mucosa in relation to the Z-line. It is most often caused by gastric acid flowing back into the oesophagus. Clinically, early treatment is necessary for improvement and for the prevention of possible complications [30].
- Polyps: lesions of the intestine detectable as prominent areas of mucosa. They may be elevated, flat or pedunculated. Most polyps are harmless, but a small fraction of them can develop into colorectal cancer, so their detection and elimination substantially reduces the risk of developing the disease. They are often missed by the human eye during colonoscopies, a gap that artificial intelligence is intended to cover [30].
- Ulcerative colitis: a chronic inflammatory disease affecting the large intestine, which mainly affects quality of life. It is therefore not a disease directly related to colorectal cancer, but investigations to detect cancer can also reveal this adjacent disease [30].
- Polyps located in the large bowel: the precursors of cancer, so they must be removed as soon as possible. One such technique is endoscopic mucosal resection, which involves injecting a liquid under the polyp, causing it to detach from the tissue beneath it, followed by removal using surgical scissors.
- Coloured polyps: polyps to which the lifting solution has been applied, detaching them from the underlying tissue and at the same time colouring them with a bluish hue. Their edges become visible and the surrounding healthy tissue is distinguishable. They are of interest because they reveal the success of the excision or any remaining malignant areas [30].
- Dyed resection margins: the areas left over after the elimination of cancerous tissue, which indicate whether or not the polyp has been completely removed. Remaining tissue may continue to develop, causing the disease to return [30].
3.1. Image Classification
3.2. Polyps Identification
3.3. The Developed Application
- A “Button” object to select and load image data from a storage device;
- An “Image” object to view uploaded images;
- A “Table” object to display information such as the image source, the network prediction, and the confidence of that prediction;
- A “Camera” object with which the user can connect to the available camera and perform an online endoscopy process;
- Two variables in which the networks are loaded at the time of application launch.
- The “ImportDataButtonPushed” function loads data. It waits until the user has selected one or more images and then, depending on the number of images, prepares the data for the tissue-classification algorithm. Since classification requires many mathematical operations, the waiting time until the results are ready for display is on the order of seconds; an additional window therefore notifies the user that the program has not crashed but needs a few moments. As soon as classification completes, the data are passed to the function that organises the information in a table.
- The “displayResultsIntoTable” function receives as parameters the file path, the prediction and the score for each image, and displays this information in a table. It also checks whether the table already contains data, with two options: initialising the table with the new data or appending the new rows to the existing ones.
- The “startColonoscopy” function is triggered when the user initiates the colonoscopy process. It establishes the connection to the available camera; if the connection does not exist or cannot be made for various reasons, a warning message is displayed. If the connection succeeds, a new window opens in which the data received from the camera can be viewed, with the prediction for the current frame displayed at the top. A button allows the user to pause or resume the colonoscopy; if the process is paused and the frame contains a polyp, another button appears that triggers detection of the malignant area.
- Another important function is “startUpFcn”, which is called when the application is launched. It initialises the variables that will be used: the classification and segmentation networks and the image size. In this way the subsequent waiting time is reduced.
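The callback logic described above can be sketched as follows. The original application is built in MATLAB App Designer, so this Python version is purely illustrative: the function names mirror the ones in the text, but the classifier stub and its hypothetical return value are placeholders for the trained network.

```python
def classify(image_path):
    """Placeholder for the GoogleNet-based tissue classifier (hypothetical
    output; a real implementation would run the trained network here)."""
    return "polyps", 0.97

def display_results_into_table(table, paths, predictions, scores):
    """Mirror of 'displayResultsIntoTable': initialise the table with the new
    rows, or append them to the existing ones."""
    rows = [{"source": p, "prediction": pred, "confidence": s}
            for p, pred, s in zip(paths, predictions, scores)]
    if not table:           # table is empty -> initialise with the new data
        return rows
    return table + rows     # otherwise concatenate to the existing rows

def import_data_button_pushed(table, selected_paths):
    """Mirror of 'ImportDataButtonPushed': classify each selected image,
    then hand the results to the table-display function."""
    predictions, scores = zip(*(classify(p) for p in selected_paths))
    return display_results_into_table(table, list(selected_paths),
                                      predictions, scores)
```

The same initialise-or-append check keeps previously classified images visible when the user loads additional data.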
3.4. Application Facilities
4. Results
- (a) Sensitivity: the cases correctly classified as positive relative to the actual number of positive cases: Sensitivity = TP/(TP + FN);
- (b) Specificity: the ratio of correctly classified negative cases relative to the truly negative cases: Specificity = TN/(TN + FP);
- (c) Precision: the ratio of those correctly classified as positive to the total number of cases classified as positive: Precision = TP/(TP + FP);
- (d) Accuracy: the ratio of all correctly classified cases to the total number of cases: Accuracy = (TP + TN)/(TP + TN + FP + FN);
- (e) F1 score: considered a better benchmark than accuracy when the aim is to compare different models [12]. It is calculated as the harmonic mean of precision and sensitivity: F1 = 2 · Precision · Sensitivity/(Precision + Sensitivity);
- (f) Jaccard index: penalises false detections and rewards true detections: Jaccard = TP/(TP + FP + FN).
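The metrics above can be computed directly from the entries of a binary confusion matrix; a minimal sketch (the function name is ours, not from the paper):

```python
def confusion_metrics(tp, tn, fp, fn):
    """Evaluation metrics from binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                   # true-positive rate
    specificity = tn / (tn + fp)                   # true-negative rate
    precision = tp / (tp + fp)                     # positive predictive value
    accuracy = (tp + tn) / (tp + tn + fp + fn)     # overall correctness
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    jaccard = tp / (tp + fp + fn)                  # intersection over union
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy,
            "f1": f1, "jaccard": jaccard}
```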
4.1. Results Obtained for the “Kvasir” Dataset
4.2. Results for AlexNet
4.3. Results for VGG16 and VGG19
4.4. Results for Inceptionv3
4.5. Results Obtained for Polyp Localisation
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Kuipers, E.J.; Grady, W.M.; Lieberman, D.; Seufferlein, T.; Sung, J.J.; Boelens, P.G.; Van De Velde, C.J.H.; Watanabe, T. Colorectal cancer. Nat. Rev. Dis. Primers 2015, 1, 15065.
2. Haggar, F.A.; Boushey, R.P. Colorectal Cancer Epidemiology: Incidence, Mortality, Survival, and Risk Factors. Clin. Colon Rectal Surg. 2009, 22, 191–197.
3. Kainz, P.; Pfeiffer, M.; Urschler, M. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization. PeerJ 2017, 5, e3874.
4. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
5. Fleming, M.; Ravula, S.; Tatishchev, S.F.; Wang, H.L. Colorectal carcinoma: Pathologic aspects. J. Gastrointest. Oncol. 2012, 3, 153–173.
6. Yue, X.; Dimitriou, N.; Caie, P.; Harrison, D.; Arandjelovic, O. Colorectal Cancer Outcome Prediction from H&E Whole Slide Images using Machine Learning and Automatically Inferred Phenotype Profiles. EPiC Ser. Comput. 2019, 60, 139–149.
7. Reinhard, E.; Adhikhmin, M.; Gooch, B.; Shirley, P. Color transfer between images. IEEE Eng. Med. Biol. Mag. 2001, 21, 34–41.
8. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
9. Urban, G.; Tripathi, P.; Alkayali, T.; Mittal, M.; Jalali, F.; Karnes, W.; Baldi, P. Deep Learning Localizes and Identifies Polyps in Real Time With 96% Accuracy in Screening Colonoscopy. Gastroenterology 2018, 155, 1069–1078.e8.
10. Van Erven, T.; Harremos, P. Rényi Divergence and Kullback-Leibler Divergence. IEEE Trans. Inf. Theory 2014, 60, 3797–3820.
11. Wang, S.; Tang, C.; Sun, J.; Yang, J.; Huang, C.; Phillips, P.; Zhang, Y.-D. Multiple Sclerosis Identification by 14-Layer Convolutional Neural Network with Batch Normalization, Dropout, and Stochastic Pooling. Front. Neurosci. 2018, 12, 818.
12. Martin, M.; Sciolla, B.; Sdika, M.; Quetin, P.; Delachartre, P. Segmentation of neonates cerebral ventricles with 2D CNN in 3D US data: Suitable training-set size and data augmentation strategies. In Proceedings of the 2019 IEEE International Ultrasonics Symposium (IUS), Glasgow, UK, 6–9 October 2019; pp. 2122–2125.
13. Sudre, C.H.; Li, W.; Vercauteren, T.; Ourselin, S.; Cardoso, M.J. Generalised Dice Overlap as a Deep Learning Loss Function for Highly Unbalanced Segmentations. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer International Publishing: Berlin/Heidelberg, Germany, 2017.
14. Dahiru, T. p-value, a true test of statistical significance? A cautionary note. Ann. Ib. Postgrad. Med. 2008, 6, 21–26.
15. Russakovsky, O. ImageNet Large Scale Visual Recognition Challenge. arXiv 2015, arXiv:1409.0575.
16. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788.
17. Kang, H.-J. Real-Time Object Detection on 640×480 Image with VGG16+SSD. In Proceedings of the 2019 International Conference on Field-Programmable Technology (ICFPT), Tianjin, China, 9–13 December 2019.
18. Xia, Y.; Cai, M.; Ni, C.; Wang, C.; Shiping, E.; Li, H. A Switch State Recognition Method based on Improved VGG19 network. In Proceedings of the 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chengdu, China, 20–22 December 2019; Volume 1, pp. 1658–1662.
19. Tian, X.; Chen, C. Modulation Pattern Recognition Based on Resnet50 Neural Network. In Proceedings of the 2019 IEEE 2nd International Conference on Information Communication and Signal Processing (ICICSP), Weihai, China, 28–30 September 2019.
20. Chen, C.; Qi, F. Single Image Super-Resolution Using Deep CNN with Dense Skip Connections and Inception-ResNet. In Proceedings of the 2018 9th International Conference on Information Technology in Medicine and Education (ITME), Hangzhou, China, 19–21 October 2018.
21. Tajbakhsh, N.; Shin, J.Y.; Gurudu, S.R.; Hurst, R.T.; Kendall, C.B.; Gotway, M.B.; Liang, J. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning? IEEE Trans. Med. Imaging 2016, 35, 1299–1312.
22. Park, D.S. Colonoscopic polyp detection using convolutional neural networks. Proc. SPIE 2016, 9875, 978–984.
23. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Proc. Neural Inf. Process. Syst. 2012, 60, 1097–1105.
24. Shin, Y.; Qadir, H.A.; Aabakken, L.; Bergsland, J.; Balasingham, I. Automatic Colon Polyp Detection Using Region Based Deep CNN and Post Learning Approaches. IEEE Access 2018, 6, 40950–40962.
25. Pesapane, F.; Rotili, A.; Penco, S.; Montesano, M.; Agazzi, G.; Dominelli, V.; Trentin, C.; Pizzamiglio, M.; Cassano, E. Inter-Reader Agreement of Diffusion-Weighted Magnetic Resonance Imaging for Breast Cancer Detection: A Multi-Reader Retrospective Study. Cancers 2021, 13, 1978.
26. Usuda, K.; Ishikawa, M.; Iwai, S.; Iijima, Y.; Motono, N.; Matoba, M.; Doai, M.; Hirata, K.; Uramoto, H. Combination Assessment of Diffusion-Weighted Imaging and T2-Weighted Imaging Is Acceptable for the Differential Diagnosis of Lung Cancer from Benign Pulmonary Nodules and Masses. Cancers 2021, 13, 1551.
27. Debelee, T.G.; Kebede, S.R.; Schwenker, F.; Shewarega, Z.M. Deep Learning in Selected Cancers’ Image Analysis—A Survey. J. Imaging 2020, 6, 121.
28. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common objects in context. In Computer Vision–ECCV 2014. ECCV 2014. Lecture Notes in Computer Science; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer International Publishing: Cham, Switzerland, 2014; Volume 8693.
29. Bernal, J.; Sánchez, J.; Vilariño, F. Towards Automatic Polyp Detection with a Polyp Appearance Model. Pattern Recognit. 2012, 45, 3166–3182.
30. Pogorelov, K.; Randel, K.R.; Griwodz, C.; Eskeland, S.L.; de Lange, T.; Johansen, D.; Spampinato, C.; Dang-Nguyen, D.T.; Lux, M.; Schmidt, P.T.; et al. Kvasir: A Multi-Class Image Dataset for Computer Aided Gastrointestinal Disease Detection. In Proceedings of the 8th ACM on Multimedia Systems Conference, Taipei, Taiwan, 20–23 June 2017.
31. Vrejoiu, M.H. Convolutional neural networks, Big Data and Deep Learning in automatic image analysis. Rev. Română Inform. Autom. 2019, 29, 91–114. (In Romanian)
32. Baratloo, A.; Hosseini, M.; Negida, A.; El Ashal, G. Part 1: Simple Definition and Calculation of Accuracy, Sensitivity and Specificity. Arch. Acad. Emerg. Med. (Emerg.) 2015, 3, 48–49.
33. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
34. Zhou, B.; Lapedriza, A.; Khosla, A.; Oliva, A.; Torralba, A. Places: A 10 Million Image Database for Scene Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1452–1464.
35. Stacey, R. Deep Learning: Which Loss and Activation Functions Should I Use? Available online: https://towardsdatascience.com/deep-learning-which-loss-and-activation-functions-should-i-use-ac02f1c56aa8 (accessed on 29 February 2020).
36. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. J. Mach. Learn. Res. 2010, 9, 249–256.
37. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
38. Bernal, J.; Sánchez, F.J.; Fernández-Esparrach, M.G.; Gil, D.; Rodríguez, C.; Vilariño, F. WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Comput. Med. Imaging Graph. 2015, 43, 99–111.
39. Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:1706.05587.
40. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inceptionv4, inception-ResNet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), Mountain View, CA, USA, 4–9 February 2017.
41. Cui, X.; Zheng, K.; Gao, L.; Zhang, B.; Yang, D.; Ren, J. Multiscale Spatial-Spectral Convolutional Network with Image-Based Framework for Hyperspectral Imagery Classification. Remote Sens. 2019, 19, 11.
42. Chen, C.; Zhou, K.; Zha, M.; Qu, X.; Guo, X.; Chen, H.; Wang, Z.; Xiao, R. An Effective Deep Neural Network for Lung Lesions Segmentation from COVID-19 CT Images. IEEE Trans. Ind. Inform. 2021, 17, 6528–6538.
43. Huang, Y.-J.; Dou, Q.; Wang, Z.-X.; Liu, L.-Z.; Jin, Y.; Li, C.-F.; Wang, L.; Chen, H.; Xu, R.-H. 3-D RoI-Aware U-Net for Accurate and Efficient Colorectal Tumor Segmentation. IEEE Trans. Cybern. 2020, early access.
44. Tulbure, A.A.; Tulbure, A.A.; Dulf, E.H. A review on modern defect detection models using DCNNs–Deep convolutional neural networks. J. Adv. Res. 2021, in press.
45. Jha, D.; Smedsrud, P.H.; Johansen, D.; de Lange, T.; Johansen, H.; Halvorsen, P.; Riegler, M. A Comprehensive Study on Colorectal Polyp Segmentation with ResUNet++, Conditional Random Field and Test-Time Augmentation. IEEE J. Biomed. Health Inform. 2021, 25, 2029–2040.
46. Lorenzovici, N.; Dulf, E.-H.; Mocan, T.; Mocan, L. Artificial Intelligence in Colorectal Cancer Diagnosis Using Clinical Data: Non-Invasive Approach. Diagnostics 2021, 11, 514.
Parameter Name | Value |
---|---|
Batch size | 10 |
Number of epochs | 10 |
Learning rate | 3 × 10⁻⁴ |
Optimisation method | Stochastic Gradient Descent with Momentum |
Momentum | 0.9 |
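The Stochastic Gradient Descent with Momentum optimiser in the table above performs, per parameter, the update v ← μv − η∇w and w ← w + v. A minimal sketch using the tabulated values (the function name is ours; batch size and epoch count would live in the surrounding training loop):

```python
def sgdm_step(w, grad, velocity, lr=3e-4, momentum=0.9):
    """One SGDM parameter update with the learning rate (3e-4) and
    momentum (0.9) from the table: v <- mu*v - lr*grad; w <- w + v."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity
```

Momentum accumulates past gradients in `velocity`, which smooths the update direction across noisy mini-batches.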
Parameter Name | Value |
---|---|
Learn rate drop period | 10 |
Learn rate drop factor | 0.3 |
Initial learn rate | 10⁻³ |
Momentum | 0.9 |
Validation frequency | 50 |
Number of maximum epochs | 40 |
Batch size | 2 |
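The learn-rate drop period and factor in the table above describe a step-decay schedule: starting from the initial rate 10⁻³, the learning rate is multiplied by 0.3 every 10 epochs. A sketch of that schedule (function name ours):

```python
def learning_rate(epoch, initial=1e-3, drop_factor=0.3, drop_period=10):
    """Step-decay schedule: multiply the rate by drop_factor once per
    drop_period epochs, as in the training parameters above."""
    return initial * drop_factor ** (epoch // drop_period)
```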
Parameter Name | Value |
---|---|
Pixel range | [−30, 30] |
Scale range | [0.7, 1.5] |
RandXTranslation | Pixel range |
RandYTranslation | Pixel range |
RandXshear | [−10, 10] |
RandYshear | [−10, 10] |
RandXScale | Scale range |
RandYScale | Scale range |
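The augmentation table above defines uniform sampling ranges for translation (±30 pixels), shear (±10 degrees) and scale (0.7–1.5) along each axis. A sketch of drawing one set of random augmentation parameters from those ranges (the function and key names are ours, not from the paper):

```python
import random

def sample_augmentation(rng=random):
    """Draw one set of per-image augmentation parameters from the
    ranges in the table above."""
    return {
        "tx": rng.uniform(-30, 30),        # RandXTranslation (pixel range)
        "ty": rng.uniform(-30, 30),        # RandYTranslation (pixel range)
        "shear_x": rng.uniform(-10, 10),   # RandXShear (degrees)
        "shear_y": rng.uniform(-10, 10),   # RandYShear (degrees)
        "scale_x": rng.uniform(0.7, 1.5),  # RandXScale (scale range)
        "scale_y": rng.uniform(0.7, 1.5),  # RandYScale (scale range)
    }
```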
Architecture | Accuracy [%] | Sensitivity [%] | Specificity [%] | Precision [%] | F1 [%] |
---|---|---|---|---|---|
GoogleNet-ImageNet | 99.38 | 97.53 | 99.65 | 97.55 | 97.54 |
GoogleNet-Places365 | 98.83 | 95.3 | 99.33 | 95.35 | 95.32 |
Uninitialised GoogleNet | 96.71 | 86.85 | 98.12 | 87.01 | 86.93 |
Architecture | Accuracy [%] | Sensitivity [%] | Specificity [%] | Precision [%] | F1 [%] |
---|---|---|---|---|---|
GoogleNet-ImageNet | 98.44 | 93.77 | 99.11 | 94.04 | 93.91 |
GoogleNet-Places365 | 98.25 | 93.00 | 99.00 | 93.30 | 93.21 |
Uninitialised GoogleNet | 95.26 | 81.03 | 97.29 | 82.05 | 81.53 |
Architecture | Augmentation | Accuracy [%] | Sensitivity [%] | Specificity [%] | Precision [%] | F1 [%] |
---|---|---|---|---|---|---|
Pre-initialised AlexNet | - | 98.98 | 94.70 | 99.24 | 94.78 | 94.74 |
Pre-initialised AlexNet | Yes | 97.77 | 91.08 | 98.73 | 91.41 | 91.24 |
Uninitialised AlexNet | - | 95.59 | 83.67 | 97.69 | 83.85 | 83.76 |
Uninitialised AlexNet | Yes | 93.58 | 74.32 | 96.33 | 75.39 | 74.86 |
Architecture | Augmentation | Accuracy [%] | Sensitivity [%] | Specificity [%] | Precision [%] | F1 [%] |
---|---|---|---|---|---|---|
Pre-initialised VGG16 | - | 95.03 | 95.01 | 99.09 | 94.09 | 94.56 |
Pre-initialised VGG16 | Yes | 98.41 | 93.65 | 99.09 | 93.95 | 93.80 |
Uninitialised VGG16 | - | 93.53 | 74.12 | 96.30 | 76.24 | 75.17 |
Uninitialised VGG16 | Yes | 91.33 | 65.30 | 96.30 | 76.24 | 66.37 |
Pre-initialised VGG19 | - | 99.10 | 96.40 | 99.49 | 96.54 | 96.47 |
Pre-initialised VGG19 | Yes | 98.02 | 92.08 | 98.87 | 93.34 | 92.70 |
Uninitialised VGG19 | - | 93.34 | 73.35 | 96.19 | 73.24 | 73.33 |
Uninitialised VGG19 | Yes | 91.36 | 67.66 | 95.06 | 67.66 | 66.53 |
Architecture | Augmentation | Accuracy [%] | Sensitivity [%] | Specificity [%] | Precision [%] | F1 [%] |
---|---|---|---|---|---|---|
Pre-initialised Inceptionv3 | - | 99.53 | 98.13 | 99.73 | 98.15 | 98.14 |
Pre-initialised Inceptionv3 | Yes | 99.41 | 96.67 | 99.66 | 97.67 | 97.66 |
Uninitialised Inceptionv3 | - | 93.98 | 75.92 | 96.56 | 76.07 | 76.00 |
Uninitialised Inceptionv3 | Yes | 92.80 | 71.45 | 95.92 | 73.10 | 72.27 |
Augmentation | Accuracy Per Class [%] | Accuracy Average [%] | Jaccard Per Class [%] | Jaccard Average [%] | F1 Per Class [%] | F1 Average [%] |
---|---|---|---|---|---|---|
No | 97.14 / 86.59 | 91.86 | 95.93 / 67.2 | 81.56 | 85.37 / 33.27 | 59.32 |
Yes | 96.25 / 84.11 | 90.18 | 94.79 / 60.66 | 77.72 | 84.78 / 28.94 | 56.86 |
Augmentation | Accuracy Per Class [%] | Accuracy Average [%] | Jaccard Per Class [%] | Jaccard Average [%] | F1 Per Class [%] | F1 Average [%] |
---|---|---|---|---|---|---|
No | 89.24 / 72.70 | 80.97 | 84.70 / 47.08 | 65.89 | 71.76 / 20.33 | 46.04 |
Yes | 75.81 / 90.94 | 83.37 | 74.44 / 40.82 | 57.63 | 67.94 / 19.34 | 43.64 |
Augmentation | Accuracy Per Class [%] | Accuracy Average [%] | Jaccard Per Class [%] | Jaccard Average [%] | F1 Per Class [%] | F1 Average [%] |
---|---|---|---|---|---|---|
No | 89.76 / 92.81 | 91.28 | 88.5 / 61.07 | 74.78 | 78.5 / 30.7 | 54.6 |
Yes | 89.65 / 92.95 | 91.3 | 88.42 / 61.94 | 75.18 | 78.35 / 29.68 | 54.01 |
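In the segmentation tables above, each metric is reported per tissue class (two values, presumably non-polyp vs. polyp tissue) alongside an "Average" column, which is simply the mean of the two per-class values. A one-line check (function name ours):

```python
def class_average(per_class):
    """Average of the per-class metric values, as reported in the
    'Average' columns of the segmentation tables above."""
    return sum(per_class) / len(per_class)
```

For example, the accuracy row 97.14 / 86.59 averages to 91.86(5), matching the tabulated value.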
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Dulf, E.-H.; Bledea, M.; Mocan, T.; Mocan, L. Automatic Detection of Colorectal Polyps Using Transfer Learning. Sensors 2021, 21, 5704. https://doi.org/10.3390/s21175704