Internet of Medical Things: An Effective and Fully Automatic IoT Approach Using Deep Learning and Fine-Tuning to Lung CT Segmentation
Figure 1. <p>Illustration of the LINDA system architecture. The platform is accessed from computers, notebooks, and mobile devices. The IoT system (<b>1</b>) features a web access interface with two options: training or prediction. In the training stage (<b>2</b>), the user must provide the image exams and the classes. Then, at (<b>2.a</b>), deep attribute extraction is performed, generating a new one-dimensional vector data set. In (<b>2.b</b>), the classifiers are trained, combining each extractor and classifier. In (<b>2.c</b>), the trained models perform a classification step on the previously separated test set. In (<b>3</b>), the results of the experiments and their charts are made available, and the user can select the best combination. In (<b>4</b>), the model is made available through a cloud service that can be accessed from any device with Internet access. In the prediction stage (<b>5</b>), the user can access an API to perform a pre-diagnosis or, in the case of this work, classify the absence or presence of lung in the image. This figure represents STEP 1—IOT APPLICATION of Figure 3.</p>
Figure 2. <p>Illustration of the deep-feature generation process on lung CT images, in which the initial layers undergo pooling before forming the last convolutional layer. This makes the method less sensitive to the location of features, since pooling down-samples the maps [<a href="#B67-sensors-20-06711" class="html-bibr">67</a>].</p>
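The location insensitivity attributed to pooling above can be illustrated with a toy example (this is not the authors' network, just a minimal sketch of the principle): after a global max-pool, two feature maps whose activations sit at different positions collapse to the same descriptor.

```python
import numpy as np

def global_max_pool(feature_map):
    """Collapse a 2-D feature map to one activation, discarding its position."""
    return feature_map.max()

# Two maps with the same response at different locations...
a = np.zeros((8, 8)); a[1, 1] = 1.0
b = np.zeros((8, 8)); b[6, 3] = 1.0

# ...become identical descriptors after pooling: the response survives,
# its location does not.
assert global_max_pool(a) == global_max_pool(b) == 1.0
```

Real networks pool locally over small windows rather than globally, but the effect is the same in kind: each pooling stage trades spatial resolution for translation tolerance.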
Figure 3. <p>The API uses different CNNs and classifiers in an IoT-based architecture to classify lung CT slices according to the presence or absence of lung. The API provides a graphical interface for interaction with the user, in which the user inserts the image bank and the classes to be classified. This interface communicates with the server, which contains the algorithm (endpoint) that applies the different CNN models. Then, through Transfer Learning, it sends the attribute vectors to the classifiers. This process runs in parallel: each CNN model receives the same images and sends its features to the respective classifiers, since the goal is to find the best combination of CNN extractor and classifier.</p>
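The exhaustive pairing of extractors and classifiers described above can be sketched as follows. This is a hypothetical, simplified stand-in: the toy extractors and the nearest-centroid classifier below replace the pretrained CNNs (VGG19, ResNet50, ...) and the Bayes/MLP/SVM classifiers of the actual system, and accuracy is scored on the training images for brevity rather than on the separated test set the paper uses.

```python
import numpy as np

def extractor_mean(img):
    """1-D descriptor: per-row intensity means (toy stand-in for a CNN)."""
    return img.mean(axis=1)

def extractor_hist(img):
    """1-D descriptor: coarse 4-bin intensity histogram (toy stand-in)."""
    return np.histogram(img, bins=4, range=(0.0, 1.0))[0].astype(float)

class NearestCentroid:
    """Minimal classifier: predict the label of the closest class centroid."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {c: X[[i for i, t in enumerate(y) if t == c]].mean(axis=0)
                           for c in self.classes_}
        return self

    def predict(self, X):
        return [min(self.classes_, key=lambda c: np.linalg.norm(x - self.centroids_[c]))
                for x in X]

def best_combination(images, labels, extractors, classifiers):
    """Evaluate every extractor/classifier pair and keep the most accurate one."""
    best = (None, None, -1.0)
    for ex_name, extract in extractors.items():
        X = np.array([extract(img) for img in images])  # feature vectors
        for clf_name, make_clf in classifiers.items():
            clf = make_clf().fit(X, labels)
            acc = float(np.mean(np.array(clf.predict(X)) == np.array(labels)))
            if acc > best[2]:
                best = (ex_name, clf_name, acc)
    return best

# Synthetic "slices": bright squares stand in for slices containing lung.
rng = np.random.default_rng(0)
images = [rng.uniform(0.6, 1.0, (8, 8)) for _ in range(5)] + \
         [rng.uniform(0.0, 0.4, (8, 8)) for _ in range(5)]
labels = ["lung"] * 5 + ["no_lung"] * 5

ex_name, clf_name, acc = best_combination(
    images, labels,
    {"mean": extractor_mean, "hist": extractor_hist},
    {"centroid": NearestCentroid},
)
```

Because the pairs are independent, the real system can evaluate them concurrently, which is the parallelism the caption refers to.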
Figure 4. <p>This infogram represents the second phase of the proposed method. The image (INPUT), the result of the first phase (STEP 1), is the input to the Mask R-CNN network. In (<b>STEP 2</b>), the deep-learning-based model creates a map of the proposed region belonging to the lung, represented by the image (<b>STEP 3</b>—Mask R-CNN OUTPUT). Finally, in (<b>STEP 4</b>), a Transfer Learning process using the Parzen-window method finishes the segmentation of the lung region.</p>
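The Parzen-window refinement in (STEP 4) can be illustrated with a one-dimensional kernel density estimate: each pixel is assigned to the class (lung vs. background) whose estimated density at its intensity is higher. The sample intensities below are invented for illustration; the actual method operates on the Mask R-CNN region map.

```python
import numpy as np

def parzen_density(x, samples, h=0.1):
    """Parzen-window estimate p(x) = (1/n) * sum_i K((x - x_i)/h) / h,
    using a Gaussian kernel K and bandwidth (window width) h."""
    u = (np.asarray(x) - samples[:, None]) / h
    kernel = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return kernel.mean(axis=0) / h

# Hypothetical intensity samples drawn from the two coarse Mask R-CNN regions.
lung_samples = np.array([0.20, 0.25, 0.22, 0.18, 0.24])
background_samples = np.array([0.70, 0.75, 0.72, 0.68, 0.74])

# Refinement rule: a pixel goes to the class with the higher estimated density.
pixels = np.array([0.21, 0.73, 0.23, 0.69])
is_lung = parzen_density(pixels, lung_samples) > parzen_density(pixels, background_samples)
# is_lung -> [True, False, True, False]
```

The bandwidth `h` controls the smoothness of the estimate; too small and the density spikes at each sample, too large and the two classes blur together.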
Figure 5. <p>Segmentation of the lung using our Composite Method, based on deep learning combined with the Parzen-window method through Transfer Learning.</p>
Figure 6. <p>Different views of the model's segmentation: in line 1, the lung CT images are shown without segmentation; in line 2, the images segmented by the proposed method; in line 3, the images are enlarged for better visualization of the results; in line 4, the segmentation served as a basis for lung extraction, for a better visual analysis of the model's final result.</p>
Figure 7. <p>Illustration of the metrics obtained with the Mask R-CNN output against the output generated by the proposed method.</p>
Figure 8. <p>3D lung segmentation using the proposed method, based on deep learning in conjunction with the Parzen-window method through Transfer Learning.</p>
Figure 9. <p>Comparison of the metric values obtained by the proposed method and other transfer-learning-based methods in the literature presented in Table 7: the combined models Mask + Bayes, Mask + K-means, Mask + EM, and Mask + SVM from the study of [<a href="#B13-sensors-20-06711" class="html-bibr">13</a>].</p>
Figure 10. <p>Comparison of the metric values obtained by the proposed method and other methods in the literature presented in <a href="#sensors-20-06711-t004" class="html-table">Table 4</a>.</p>
Figure 11. <p>Segmentation time of models in the literature compared to the method proposed in this study; the visual data refer to <a href="#sensors-20-06711-t005" class="html-table">Table 5</a>.</p>
Abstract
1. Introduction
- The application of computer vision tools based on IoT in medical images.
- The use of deep extractors for the classification of pulmonary CT images.
- CT segmentation of lung images based on deep learning techniques.
- The use of Deep Learning with a fine-tuning technique based on Mask R-CNN and Parzen-window.
2. Related Works
3. Background
3.1. LINDA
3.2. Deep Learning Extractors
3.3. Classifiers
3.4. Deep Learning
3.5. Parzen-Window
3.6. Metrics
4. Methodology
4.1. Data-Set
4.2. Methodology
4.2.1. First Phase—Classification
4.2.2. Second Phase—Segmentation
5. Results and Discussion
5.1. First Stage of the Experiment
5.2. Second Stage of the Experiment
5.2.1. Result Second Stage—(Step 1: Pulmonary Segmentation Using the Proposed Method)
5.2.2. Result Second Stage—(Step 2: Comparison with Literature Methods)
6. Conclusions and Future Works
Author Contributions
Funding
Conflicts of Interest
References
- Buzug, T.M. Computed tomography. In Springer Handbook of Medical Technology; Springer: Berlin/Heidelberg, Germany, 2011; pp. 311–342.
- Gualtieri, P.; Falcone, C.; Romano, L.; Macheda, S.; Correale, P.; Arciello, P.; Polimeni, N.; Lorenzo, A.D. Body composition findings by computed tomography in SARS-CoV-2 patients: Increased risk of muscle wasting in obesity. Int. J. Mol. Sci. 2020, 21, 4670.
- Tsoukalas, D.; Sarandi, E.; Thanasoula, M.; Docea, A.O.; Tsilimidos, G.; Calina, D.; Tsatsakis, A. Metabolic Fingerprint of Chronic Obstructive Lung Diseases: A New Diagnostic Perspective. Metabolites 2019, 9, 290.
- Zhang, L.; Zheng, J.; Ahmed, R.; Huang, G.; Reid, J.; Mandal, R.; Maksymuik, A.; Sitar, D.S.; Tappia, P.S.; Ramjiawan, B.; et al. A high-performing plasma metabolite panel for early-stage lung cancer detection. Cancers 2020, 12, 622.
- Burney, P.G.; Patel, J.; Newson, R.; Minelli, C.; Naghavi, M. Global and regional trends in COPD mortality, 1990–2010. Eur. Respir. J. 2015, 45, 1239–1247.
- World Health Organization. Global Tuberculosis Report 2018; WHO: Geneva, Switzerland, 2019.
- Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.L.; Torre, L.A.; Jemal, A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2018, 68, 394–424.
- Brunese, L.; Mercaldo, F.; Reginelli, A.; Santone, A. Neural Networks for Lung Cancer Detection through Radiomic Features. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–10.
- Ryu, J.H.; Daniels, C.E.; Hartman, T.E.; Eunhee, S.Y. Diagnosis of interstitial lung diseases. In Mayo Clinic Proceedings; Elsevier: Rochester, MN, USA, 2007; Volume 82, pp. 976–986.
- Vinod, S.K.; Jameson, M.G.; Min, M.; Holloway, L.C. Uncertainties in volume delineation in radiation oncology: A systematic review and recommendations for future studies. Radiother. Oncol. 2016, 121, 169–179.
- Bechtold, R.E.; Chen, M.Y.; Ott, D.J.; Zagoria, R.J.; Scharling, E.S.; Wolfman, N.T.; Vining, D.J. Interpretation of abdominal CT: Analysis of errors and their causes. J. Comput. Assist. Tomogr. 1997, 21, 681–685.
- Man, M.A.; Dantes, E.; Domokos Hancu, B.; Bondor, C.I.; Ruscovan, A.; Parau, A.; Motoc, N.S.; Marc, M. Correlation between transthoracic lung ultrasound score and HRCT features in patients with interstitial lung diseases. J. Clin. Med. 2019, 8, 1199.
- Hu, Q.; Souza, L.F.d.F.; Holanda, G.B.; Alves, S.S.; Silva, F.H.d.S.; Han, T.; Reboucas Filho, P.P. An effective approach for CT lung segmentation using mask region-based convolutional neural networks. Artif. Intell. Med. 2020, 103, 101792.
- Peña, D.M.; Luo, S.; Abdelgader, A. Auto diagnostics of lung nodules using minimal characteristics extraction technique. Diagnostics 2016, 6, 13.
- Bajwa, M.N.; Muta, K.; Malik, M.I.; Siddiqui, S.A.; Braun, S.A.; Homey, B.; Dengel, A.; Ahmed, S. Computer-Aided Diagnosis of Skin Diseases Using Deep Neural Networks. Appl. Sci. 2020, 10, 2488.
- Ullah, M.; Bari, M.; Ahmed, A.; Naveed, S. Lungs Cancer Detection Using Digital Image Processing Techniques: A Review. Mehran Univ. Res. J. Eng. Technol. 2019, 38, 351–360.
- Skourt, B.A.; El Hassani, A.; Majda, A. Lung CT Image Segmentation using deep neural networks. Procedia Comput. Sci. 2018, 127, 109–113.
- Pehrson, L.M.; Nielsen, M.B.; Ammitzbøl Lauridsen, C. Automatic pulmonary nodule detection applying deep learning or machine learning algorithms to the LIDC-IDRI database: A systematic review. Diagnostics 2019, 9, 29.
- Shojaii, R.; Alirezaie, J.; Babyn, P. Automatic lung segmentation in CT images using watershed transform. In Proceedings of the IEEE International Conference on Image Processing 2005, Genova, Italy, 14 September 2005; Volume 2, p. 1270.
- Brown, R.H.; Herold, C.J.; Hirshman, C.A. In Vivo Measurements of Airway Reactivity Using High-Resolution Computed Tomography. Am. Rev. Respir. Dis. 1991, 144, 208–212.
- Hedlund, L.; Anderson, R.; Goulding, P.; Beck, J.; Effmann, E.; Putman, C. Two methods for isolating the lung area of a CT scan for density information. Radiology 1982, 144, 353–357.
- Hoffman, E.A.; Behrenbeck, T.; Chevalier, P.A.; Wood, E.H. Estimation of regional pleural surface expansile forces in intact dogs. J. Appl. Physiol. 1983, 55, 935–948.
- Fernandes, K.; Cruz, R.; Cardoso, J.S. Deep image segmentation by quality inference. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8.
- Spanhol, F.A.; Oliveira, L.S.; Petitjean, C.; Heutte, L. Breast cancer histopathological image classification using convolutional neural networks. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 2560–2567.
- Vincent, P.; Bengio, Y. Manifold Parzen windows. In Advances in Neural Information Processing Systems; Université de Montréal: Montreal, QC, Canada, 2003; pp. 849–856.
- Wang, G.; Li, W.; Zuluaga, M.A.; Pratt, R.; Patel, P.A.; Aertsen, M.; Doel, T.; David, A.L.; Deprest, J.; Ourselin, S.; et al. Interactive medical image segmentation using deep learning with image-specific fine tuning. IEEE Trans. Med. Imaging 2018, 37, 1562–1573.
- Lin, C.J.; Jeng, S.Y.; Chen, M.K. Using 2D CNN with Taguchi Parametric Optimization for Lung Cancer Recognition from CT Images. Appl. Sci. 2020, 10, 2591.
- Manna, A.; Bhattacharyya, B. Taguchi and Gauss elimination method: A dual response approach for parametric optimization of CNC wire cut EDM of PRAlSiCMMC. Int. J. Adv. Manuf. Technol. 2006, 28, 67–75.
- Shin, M.; Paik, W.; Kim, B.; Hwang, S. An IoT platform with monitoring robot applying CNN-based context-aware learning. Sensors 2019, 19, 2525.
- Santamaria, A.F.; Raimondo, P.; Tropea, M.; De Rango, F.; Aiello, C. An IoT surveillance system based on a decentralised architecture. Sensors 2019, 19, 1469.
- Ahamed, F.; Farid, F. Applying Internet of Things and machine-learning for personalized healthcare: Issues and challenges. In Proceedings of the 2018 International Conference on Machine Learning and Data Engineering (iCMLDE), Sydney, Australia, 3–7 December 2018; pp. 19–21.
- Han, T.; Nunes, V.X.; Souza, L.F.D.F.; Marques, A.G.; Silva, I.C.L.; Junior, M.A.A.F.; Sun, J.; Rebouças Filho, P.P. Internet of Medical Things—Based on Deep Learning Techniques for Segmentation of Lung and Stroke Regions in CT Scans. IEEE Access 2020, 8, 71117–71135.
- Souza, L.F.d.F.; Holanda, G.B.; Alves, S.S.; Silva, F.H.d.S.; Rebouças Filho, P.P. Automatic Lung Segmentation in CT Images Using Mask R-CNN for Mapping the Feature Extraction in Supervised Methods of Machine Learning. In Proceedings of the International Conference on Intelligent Systems Design and Applications, Cham, Switzerland, 3–5 December 2019; pp. 140–149.
- Gatouillat, A.; Badr, Y.; Massot, B.; Sejdić, E. Internet of medical things: A review of recent contributions dealing with cyber-physical systems in medicine. IEEE Internet Things J. 2018, 5, 3810–3822.
- Chen, H.J.; Ruan, S.J.; Huang, S.W.; Peng, Y.T. Lung X-ray Segmentation using Deep Convolutional Neural Networks on Contrast-Enhanced Binarized Images. Mathematics 2020, 8, 545.
- Souza, L.F.D.F.; Holanda, G.B.; Silva, F.H.D.S.; Alves, S.S. Automatic lung segmentation in CT images using mask R-CNN for mapping the feature extraction in supervised methods of machine learning using transfer learning. Int. J. Hybrid Intell. Syst. 2020, 1–17.
- Wang, X.; Tino, P.; Fardal, M.A.; Raychaudhury, S.; Babul, A. Fast Parzen window density estimator. In Proceedings of the 2009 International Joint Conference on Neural Networks, Atlanta, GA, USA, 14–19 June 2009; pp. 3267–3274.
- Liu, C.; Pang, M. Automatic lung segmentation based on image decomposition and wavelet transform. Biomed. Signal Process. Control 2020, 61, 102032.
- Doel, T.; Gavaghan, D.J.; Grau, V. Review of automatic pulmonary lobe segmentation methods from CT. Comput. Med. Imaging Graph. 2015, 40, 13–29.
- Firmino, M.; Morais, A.H.; Mendoça, R.M.; Dantas, M.R.; Hekis, H.R.; Valentim, R. Computer-aided detection system for lung cancer in computed tomography scans: Review and future prospects. Biomed. Eng. Online 2014, 13, 41.
- Zavaletta, V.A.; Bartholmai, B.J.; Robb, R.A. High resolution multidetector CT-aided tissue analysis and quantification of lung fibrosis. Acad. Radiol. 2007, 14, 772–787.
- Van Rikxoort, E.M.; Van Ginneken, B. Automated segmentation of pulmonary structures in thoracic computed tomography scans: A review. Phys. Med. Biol. 2013, 58, R187.
- Coelho, I.M.; Coelho, V.N.; Luz, E.J.d.S.; Ochi, L.S.; Guimarães, F.G.; Rios, E. A GPU deep learning metaheuristic based model for time series forecasting. Appl. Energy 2017, 201, 412–418.
- Garcia-Garcia, A.; Orts-Escolano, S.; Oprea, S.; Villena-Martinez, V.; Garcia-Rodriguez, J. A review on deep learning techniques applied to semantic segmentation. arXiv 2017, arXiv:1704.06857.
- Wang, S.; Zhou, M.; Liu, Z.; Liu, Z.; Gu, D.; Zang, Y.; Dong, D.; Gevaert, O.; Tian, J. Central focused convolutional neural networks: Developing a data-driven model for lung nodule segmentation. Med. Image Anal. 2017, 40, 172–183.
- Duraisamy, M.; Duraisamy, S. CNN-based approach for segmentation of brain and lung MRI images. Eur. J. Sci. Res. 2012, 81, 298–313.
- Cheplygina, V.; de Bruijne, M.; Pluim, J.P. Not-so-supervised: A survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Med. Image Anal. 2019, 54, 280–296.
- Bengio, Y. Deep learning of representations for unsupervised and transfer learning. In Proceedings of the ICML Workshop on Unsupervised and Transfer Learning, Bellevue, WA, USA, 27 June 2012; pp. 17–36.
- Hwang, S.; Kim, H.E. Self-transfer learning for weakly supervised lesion localization. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; pp. 239–246.
- Rebouças Filho, P.P.; da Silva Barros, A.C.; Almeida, J.S.; Rodrigues, J.; de Albuquerque, V.H.C. A new effective and powerful medical image segmentation algorithm based on optimum path snakes. Appl. Soft Comput. 2019, 76, 649–670.
- Karamitsios, K.; Orphanoudakis, T. Efficient IoT data aggregation for connected health applications. In Proceedings of the 2017 IEEE Symposium on Computers and Communications (ISCC), Heraklion, Greece, 3–6 July 2017; pp. 1182–1185.
- Dourado, C.M., Jr.; da Silva, S.P.P.; da Nobrega, R.V.M.; Barros, A.C.d.S.; Reboucas Filho, P.P.; de Albuquerque, V.H.C. Deep learning IoT system for online stroke detection in skull computed tomography images. Comput. Netw. 2019, 152, 25–39.
- Yao, Y.; Xia, C.; Li, J.; Li, Q. Head CT Image Convolution Feature Segmentation and Morphological Filtering for Densely Matching Points of IoTs. IEEE Access 2020, 8, 12521–12532.
- Masood, A.; Sheng, B.; Li, P.; Hou, X.; Wei, X.; Qin, J.; Feng, D. Computer-assisted decision support system in pulmonary cancer detection and stage classification on CT images. J. Biomed. Inform. 2018, 79, 117–128.
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258.
- Chen, H.Y.; Su, C.Y. An enhanced hybrid MobileNet. In Proceedings of the 2018 9th International Conference on Awareness Science and Technology (iCAST), Fukuoka, Japan, 19–21 September 2018; pp. 308–312.
- Carvalho, T.; De Rezende, E.R.; Alves, M.T.; Balieiro, F.K.; Sovat, R.B. Exposing computer generated images by eye’s region classification via transfer learning of VGG19 CNN. In Proceedings of the 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), Cancun, Mexico, 18–21 December 2017; pp. 866–870.
- Wan, X.; Ren, F.; Yong, D. Using Inception-Resnet V2 for Face-based Age Recognition in Scenic Spots. In Proceedings of the 2019 IEEE 6th International Conference on Cloud Computing and Intelligence Systems (CCIS), Singapore, 19–21 December 2019; pp. 159–163.
- Yu, X.; Zeng, N.; Liu, S.; Zhang, Y.D. Utilization of DenseNet201 for diagnosis of breast abnormality. Mach. Vis. Appl. 2019, 30, 1135–1144.
- Wen, L.; Li, X.; Gao, L. A transfer convolutional neural network for fault diagnosis based on ResNet-50. Neural Comput. Appl. 2020, 32, 6111–6124.
- Miyahara, K.; Pazzani, M.J. Collaborative filtering with the simple Bayesian classifier. In Proceedings of the Pacific Rim International Conference on Artificial Intelligence, Melbourne, Australia, 28 August–1 September 2000; pp. 679–689.
- Kim, T.; Adali, T. Fully complex multi-layer perceptron network for nonlinear signal processing. J. VLSI Signal Process. Syst. Signal Image Video Technol. 2002, 32, 29–43.
- Ho, T.K. Nearest neighbors in random subspaces. In Proceedings of the Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR), Sydney, Australia, 11–13 August 1998; pp. 640–648.
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
- Hearst, M.A.; Dumais, S.T.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. Their Appl. 1998, 13, 18–28.
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969.
- Deng, L.; Abdel-Hamid, O.; Yu, D. A deep convolutional neural network using heterogeneous pooling for trading acoustic invariance with phonetic confusion. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 6669–6673.
- Gao, G. A Parzen-window-kernel-based CFAR algorithm for ship detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2010, 8, 557–561.
- Overton, I.M.; Padovani, G.; Girolami, M.A.; Barton, G.J. ParCrys: A Parzen window density estimation approach to protein crystallization propensity prediction. Bioinformatics 2008, 24, 901–907.
- Beucher, S.; Meyer, F. The morphological approach to segmentation: The watershed transformation. Math. Morphol. Image Process. 1993, 34, 433–481.
- Matthews, B.W. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim. Biophys. Acta (BBA) Protein Struct. 1975, 405, 442–451.
- Cheetham, A.H.; Hazel, J.E. Binary (presence-absence) similarity coefficients. J. Paleontol. 1969, 43, 1130–1136.
- Niwattanakul, S.; Singthongchai, J.; Naenudorn, E.; Wanapu, S. Using of Jaccard coefficient for keywords similarity. In Proceedings of the International Multiconference of Engineers and Computer Scientists, Hong Kong, China, 13–15 May 2013; Volume 1, pp. 380–384.
- Huttenlocher, D.P.; Klanderman, G.A.; Rucklidge, W.J. Comparing images using the Hausdorff distance. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 850–863.
- Fitzgerald, J.T.; Funnell, M.M.; Hess, G.E.; Barr, P.A.; Anderson, R.M.; Hiss, R.G.; Davis, W.K. The reliability and validity of a brief diabetes knowledge test. Diabetes Care 1998, 21, 706–710.
Model | Classifier | ACC(%) | SEN(%) | PREC(%) | SPE(%) | F-Score(%)
---|---|---|---|---|---|---
InceptionResNetV2 | Bayes | 91.30 | 79.17 | 76.00 | 94.15 | 77.55
 | MLP | 96.05 | 90.00 | 90.00 | 97.54 | 90.00
 | Nearest_Neighbors | 94.07 | 84.31 | 86.00 | 96.53 | 85.15
 | Random_Forest | 95.26 | 80.65 | 100.00 | 100.00 | 89.29
 | SVM_Linear | 96.44 | 91.84 | 90.00 | 97.55 | 90.91
 | SVM_Polynomial | 96.84 | 90.38 | 94.00 | 98.51 | 92.16
 | SVM_RBF | 97.63 | 92.31 | 96.00 | 99.00 | 94.12
Xception | Bayes | 85.38 | 53.52 | 90.48 | 97.80 | 67.26
 | MLP | 98.42 | 93.18 | 97.62 | 99.52 | 95.35
 | Nearest_Neighbors | 93.28 | 75.51 | 88.10 | 97.55 | 81.32
 | Random_Forest | 95.26 | 84.09 | 88.10 | 97.61 | 86.05
 | SVM_Linear | 98.02 | 89.36 | 100.00 | 100.00 | 94.38
 | SVM_Polynomial | 97.23 | 87.23 | 97.62 | 99.51 | 92.13
 | SVM_RBF | 97.23 | 85.71 | 100.00 | 100.00 | 92.31
NASNetMobile | Bayes | 78.26 | 41.27 | 59.09 | 90.53 | 48.60
 | MLP | 94.07 | 75.44 | 97.73 | 99.49 | 85.15
 | Nearest_Neighbors | 95.26 | 83.33 | 90.91 | 98.05 | 86.96
 | Random_Forest | 94.07 | 93.94 | 70.45 | 94.09 | 80.52
 | SVM_Linear | 95.26 | 82.00 | 93.18 | 98.52 | 87.23
 | SVM_Polynomial | 95.65 | 83.67 | 93.18 | 98.53 | 88.17
 | SVM_RBF | 96.05 | 85.42 | 93.18 | 98.54 | 89.13
DenseNet201 | Bayes | 84.58 | 81.82 | 19.57 | 84.71 | 31.58
 | MLP | 97.23 | 93.33 | 91.30 | 98.08 | 92.31
 | Nearest_Neighbors | 95.65 | 84.31 | 93.48 | 98.51 | 88.66
 | Random_Forest | 91.30 | 96.15 | 54.35 | 90.75 | 69.44
 | SVM_Linear | 96.84 | 95.24 | 86.96 | 97.16 | 90.91
 | SVM_Polynomial | 96.84 | 95.24 | 86.96 | 97.16 | 90.91
 | SVM_RBF | 96.84 | 95.24 | 86.96 | 97.16 | 90.91
ResNet50 | Bayes | 83.79 | 59.18 | 58.00 | 89.71 | 58.59
 | MLP | 94.86 | 83.64 | 92.00 | 97.98 | 87.62
 | Nearest_Neighbors | 95.26 | 83.93 | 94.00 | 98.48 | 88.68
 | Random_Forest | 91.70 | 89.19 | 66.00 | 92.13 | 75.86
 | SVM_Linear | 96.84 | 95.65 | 88.00 | 97.10 | 91.67
 | SVM_Polynomial | 96.84 | 93.75 | 90.00 | 97.56 | 91.84
 | SVM_RBF | 97.23 | 92.16 | 94.00 | 98.51 | 93.07
InceptionV3 | Bayes | 85.38 | 52.54 | 77.50 | 95.36 | 62.63
 | MLP | 96.44 | 87.80 | 90.00 | 98.11 | 88.89
 | Nearest_Neighbors | 96.05 | 82.61 | 95.00 | 99.03 | 88.37
 | Random_Forest | 94.47 | 90.62 | 72.50 | 95.02 | 80.56
 | SVM_Linear | 96.84 | 86.36 | 95.00 | 99.04 | 90.48
 | SVM_Polynomial | 92.49 | 69.81 | 92.50 | 98.50 | 79.57
 | SVM_RBF | 97.63 | 94.74 | 90.00 | 98.14 | 92.31
VGG19 | Bayes | 85.38 | 50.00 | 5.41 | 85.94 | 9.76
 | MLP | 96.84 | 87.18 | 91.89 | 98.60 | 89.47
 | Nearest_Neighbors | 96.05 | 84.62 | 89.19 | 98.13 | 86.84
 | Random_Forest | 96.05 | 84.62 | 89.19 | 98.13 | 86.84
 | SVM_Linear | 96.84 | 82.22 | 100.00 | 100.00 | 90.24
 | SVM_Polynomial | 96.84 | 83.72 | 97.30 | 99.52 | 90.00
 | SVM_RBF | 96.84 | 96.77 | 81.08 | 96.85 | 88.24
DenseNet169 | Bayes | 83.79 | 0.00 | 0.00 | 83.79 | 0.00
 | MLP | 96.05 | 84.44 | 92.68 | 98.56 | 88.37
 | Nearest_Neighbors | 94.47 | 77.55 | 92.68 | 98.53 | 84.44
 | Random_Forest | 94.47 | 90.91 | 73.17 | 95.00 | 81.08
 | SVM_Linear | 96.84 | 88.37 | 92.68 | 98.57 | 90.48
 | SVM_Polynomial | 96.84 | 88.37 | 92.68 | 98.57 | 90.48
 | SVM_RBF | 96.84 | 88.37 | 92.68 | 98.57 | 90.48
MobileNet | Bayes | 87.35 | 55.56 | 89.74 | 97.89 | 68.63
 | MLP | 97.23 | 86.36 | 97.44 | 99.52 | 91.57
 | Nearest_Neighbors | 95.26 | 80.00 | 92.31 | 98.56 | 85.71
 | Random_Forest | 96.05 | 83.72 | 92.31 | 98.57 | 87.80
 | SVM_Linear | 97.63 | 88.37 | 97.44 | 99.52 | 92.68
 | SVM_Polynomial | 97.23 | 90.00 | 92.31 | 98.59 | 91.14
 | SVM_RBF | 96.44 | 85.71 | 92.31 | 98.58 | 88.89
Metrics | Mask R-CNN | Proposed Method
---|---|---
Mcc | 58.03 ± 6.46 | 93.72 ± 3.99
Acc | 89.96 ± 4.38 | 98.34 ± 1.13
Sen | 88.03 ± 5.41 | 98.84 ± 0.97
Jaccard | 87.69 ± 5.69 | 97.93 ± 1.64
HD | 7.21 ± 0.24 | 7.03 ± 0.28
DICE | 93.34 ± 3.38 | 98.95 ± 0.85
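The overlap metrics reported in the tables (Jaccard and DICE) are standard set-overlap measures on binary segmentation masks. A minimal numpy sketch, using toy masks rather than the study's data:

```python
import numpy as np

def dice(a, b):
    """DICE = 2|A∩B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard = |A∩B| / |A∪B| for boolean masks."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0]], dtype=bool)   # 4 predicted lung pixels
truth = np.array([[1, 1, 1, 0],
                  [1, 1, 0, 0]], dtype=bool)  # 5 ground-truth lung pixels

# Overlap is 4 pixels: DICE = 2*4/(4+5) = 8/9, Jaccard = 4/5.
```

DICE always scores at least as high as Jaccard for the same pair of masks, which is why the DICE columns in these tables dominate the Jaccard columns.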
Methods | Mcc | Acc | Sen
---|---|---|---
Proposed Method | 93.72 ± 3.99 | 98.34 ± 1.13 | 98.84 ± 0.97
VFC | 92.13 ± 3.20 | 97.88 ± 1.29 | 98.26 ± 0.66
GVF | 90.00 ± 6.45 | 96.79 ± 3.61 | 96.75 ± 5.10
RHT mod | 91.34 ± 3.09 | 97.56 ± 1.63 | 98.22 ± 1.28
RHT multi | 85.38 ± 11.85 | 95.77 ± 3.96 | 97.68 ± 3.45
Methods | Jaccard | HD | DICE
---|---|---|---
Proposed Method | 97.93 ± 1.64 | 7.03 ± 0.28 | 98.95 ± 0.85
VFC | 87.62 ± 4.81 | 6.92 ± 1.50 | 93.33 ± 2.82
GVF | 85.11 ± 7.71 | 7.55 ± 2.29 | 91.76 ± 4.84
RHT mod | 86.65 ± 3.79 | 6.93 ± 1.79 | 92.81 ± 2.21
RHT multi | 78.82 ± 14.43 | 8.23 ± 2.91 | 87.29 ± 11.18
Methods | Average Time (s)
---|---
GVF | 240.00 ± 3.05
VFC | 30.00 ± 2.67
OPS | 8.27 ± 2.63
OPS Euclidean | 5.86 ± 1.96
SISDEP | 4.90 ± 2.02
CRAD | 2.00 ± 0.16
Proposed Method | 5.43 ± 0.21
Methods | Mcc | Acc | Sen | Jaccard | HD | DICE
---|---|---|---|---|---|---
VFC | ◯ | ◯ | ◯ | ◯ | ◯ | ◯
GVF | ◯ | ◯ | ◯ | ◯ | ◯ | ◯
RHT mod | ◯ | ◯ | ◯ | ◯ | ◯ | ◯
RHT multi | ◯ | ◯ | ◯ | ◯ | ◯ | ◯
Methods | Acc | Sen | DICE
---|---|---|---
Proposed Method | 98.34 ± 1.13 | 98.84 ± 0.97 | 98.95 ± 0.85
Mask + Bayes | 86.42 ± 11.11 | 91.06 ± 14.66 | 76.10 ± 16.49
Mask + K-means | 97.68 ± 3.42 | 96.58 ± 8.58 | 97.33 ± 3.24
Mask + EM | 97.28 ± 3.85 | 95.86 ± 8.67 | 87.63 ± 9.39
Mask + SVM | 95.78 ± 2.62 | 96.69 ± 10.24 | 86.05 ± 11.21
Method | Time (s)
---|---
Proposed Method | 5.43 ± 0.21
Mask + K-means | 11.24 ± 2.57
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Souza, L.F.d.F.; Silva, I.C.L.; Marques, A.G.; Silva, F.H.d.S.; Nunes, V.X.; Hassan, M.M.; Albuquerque, V.H.C.d.; Filho, P.P.R. Internet of Medical Things: An Effective and Fully Automatic IoT Approach Using Deep Learning and Fine-Tuning to Lung CT Segmentation. Sensors 2020, 20, 6711. https://doi.org/10.3390/s20236711
Souza LFdF, Silva ICL, Marques AG, Silva FHdS, Nunes VX, Hassan MM, Albuquerque VHCd, Filho PPR. Internet of Medical Things: An Effective and Fully Automatic IoT Approach Using Deep Learning and Fine-Tuning to Lung CT Segmentation. Sensors. 2020; 20(23):6711. https://doi.org/10.3390/s20236711
Chicago/Turabian Style: Souza, Luís Fabrício de Freitas, Iágson Carlos Lima Silva, Adriell Gomes Marques, Francisco Hércules dos S. Silva, Virgínia Xavier Nunes, Mohammad Mehedi Hassan, Victor Hugo C. de Albuquerque, and Pedro P. Rebouças Filho. 2020. "Internet of Medical Things: An Effective and Fully Automatic IoT Approach Using Deep Learning and Fine-Tuning to Lung CT Segmentation" Sensors 20, no. 23: 6711. https://doi.org/10.3390/s20236711
APA Style: Souza, L. F. d. F., Silva, I. C. L., Marques, A. G., Silva, F. H. d. S., Nunes, V. X., Hassan, M. M., Albuquerque, V. H. C. d., & Filho, P. P. R. (2020). Internet of Medical Things: An Effective and Fully Automatic IoT Approach Using Deep Learning and Fine-Tuning to Lung CT Segmentation. Sensors, 20(23), 6711. https://doi.org/10.3390/s20236711