Paper • Open access

COVID-19 detection from lung CT-scan images using transfer learning approach

Arpita Halder and Bimal Datta

Published 19 July 2021 © 2021 The Author(s). Published by IOP Publishing Ltd
Citation: Arpita Halder and Bimal Datta 2021 Mach. Learn.: Sci. Technol. 2 045013. DOI: 10.1088/2632-2153/abf22c


Abstract

Since the onset of 2020, the spread of coronavirus disease (COVID-19) has accelerated rapidly worldwide, escalating into a severe pandemic. COVID-19 has infected more than 29 million people and caused more than 900 thousand deaths at the time of writing. Since it is highly contagious, it causes explosive community transmission, and health care delivery has been disrupted and compromised by the lack of testing kits. COVID-19-infected patients show severe acute respiratory syndrome. Meanwhile, the scientific community has been implementing deep learning (DL) techniques to diagnose COVID-19 using computed tomography (CT) lung scans, since CT is a pertinent screening tool due to its higher sensitivity in recognizing early pneumonic changes. However, large datasets of CT-scan images are not publicly available due to privacy concerns, making it difficult to obtain very accurate models. Thus, to overcome this drawback, transfer-learning pre-trained models are used in the proposed methodology to classify COVID-19 (positive) and COVID-19 (negative) patients. We describe the development of a DL framework, known as KarNet, that includes pre-trained models (DenseNet201, VGG16, ResNet50V2, and MobileNet) as its backbone. To extensively test and analyze the framework, each model was trained on original (i.e. unaugmented) and manipulated (i.e. augmented) datasets. Among the four pre-trained models of KarNet, the one that used DenseNet201 demonstrated excellent diagnostic ability, with AUC scores of 1.00 and 0.99 for models trained on unaugmented and augmented datasets, respectively. Even after considerable distortion of the images (i.e. the augmented dataset), DenseNet201 achieved an accuracy of 97% on the test dataset, followed by ResNet50V2, MobileNet, and VGG16 (which achieved accuracies of 96%, 95%, and 94%, respectively).


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

In December 2019, an outbreak of a novel coronavirus disease (COVID-19) occurred in Wuhan, China. A handful of genetic and structural analyses have identified that a protein on the virus's surface makes it highly contagious and able to spread rapidly. On January 30, 2020, COVID-19 was declared a global health emergency by the World Health Organization (WHO) [1]. On February 11, 2020, a new name for the virus was introduced by the International Committee on Taxonomy of Viruses: 'Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2).' More than 29 million cases have been confirmed and 929 thousand deaths have occurred worldwide at the time of writing this paper. The fatality rate is 2%, but COVID-19 is still an acute disease; however, 19.9 million people have recovered. The common symptoms of this disease include fever, cough, sore throat, headache, diarrhea, and shortness of breath. Abnormal signs have also been noticed in the chest CT images of patients. Patients can suffer from acute respiratory distress, multiple-organ failure, and ultimately, death. The disease is transmitted from a COVID-19-infected person to another person through micron-sized droplets expelled from the mouth and/or nose while sneezing, coughing, or even speaking. It has been noticed that the elderly segment of the population is infected at a much higher rate than younger age groups. Due to the unavailability of vaccines or therapeutic treatments, quarantine and the early diagnosis of patients play the most important roles in controlling the spread of this disease.

Originally, real-time reverse transcription polymerase chain reaction (RT-PCR) was the only technique for detecting this virus in respiratory samples, but because it is time-consuming and its results are unreliable, it has faced challenges in preventing the spread of COVID-19 in the community [2]. In addition, infected cases cannot be identified in a timely manner because of the inadequate number of RT-PCR test kits, meaning that the virus may continue to spread among the healthy population without being recognized. To alleviate the inefficiency and scarcity of COVID-19 testing kits, researchers are continually developing alternative testing methods, including radiological imaging such as x-rays and chest CT scans. The radiological characteristics of this disease can be effectively determined by these techniques. Radiologists often turn first to x-ray machines, since they are widely available in hospitals. However, chest x-ray images fail to accurately detect soft tissues. Hence, to address this problem, chest CT-scan images are used, which can detect soft tissues efficiently and generate results at a faster rate [3]. To automate the whole process, deep learning (DL) techniques have been developed by researchers [4] that can automatically interpret CT-scan images and determine whether a person is COVID positive or not. The results generated by these works are quite promising, but there are two limitations. First, there is a lack of publicly available lung CT-scan datasets due to the privacy concerns of patients. This has an immense impact on the research and development of more advanced AI methods for more precise results, since DL techniques demand huge datasets during model training to meet clinical standards. Considering the current situation, in which medical professionals are heavily occupied caring for COVID-19 patients, it is unlikely that they will have the time to collect and annotate a huge COVID-19 CT-scan dataset. Secondly, as these datasets are not sharable, the trained model of one hospital cannot be used in another, and so the results cannot be reproduced.

To address this drawback, a transfer-learning approach is used to build the neural network framework, KarNet, described in this paper. In transfer learning, a pre-trained model with predefined weights and biases is used to train models on a custom dataset. This, in turn, reduces the time required to train the model and minimizes the implementation complexity associated with initializing the weights and biases of the layers in deep neural network models [5]. Medical image classification plays an essential role in clinical treatment and teaching tasks. Maithra et al briefly described the use of transfer learning in medical image analysis [6]. Even though they are trained on the ImageNet dataset, transfer-learning models retain feature-independent benefits from their pre-trained weights, such as better scaling and faster convergence. Samir et al applied a convolutional neural network (CNN) and a transfer-learning-based algorithm to a chest x-ray dataset to classify pneumonia and concluded that transfer learning was a more useful classification method for a small dataset than a CNN [7]. Transfer learning has also excelled in lung cancer detection using CT-scan images [8]. The success of transfer-learning models in medical image classification thus demonstrates better accuracy than CNNs trained from scratch or other machine learning models. Due to the small size of the available COVID-19 datasets, a pre-trained neural network was used for the diagnosis of infected patients. The use of transfer-learning techniques based on CT lung-scan images for COVID-19 detection has been very limited. However, KarNet provides satisfactory results even after extreme manipulation of the images in the dataset and performs better than the current state of the art.

2. Related works

Since the outbreak of COVID-19, researchers have continually been developing methods to screen for this virus. Due to the previous success of DL techniques in medical image analysis, researchers have used CT scans and x-rays to detect COVID-19. Diagnosis models using chest x-rays include that of Chowdhury et al, who used a CNN to build their model [9]. Much of the previous research literature has described pre-trained networks used to diagnose COVID-19; for example, [10–12] used ResNet and achieved accuracies of 96%, 99%, and 91%, respectively; Li et al used DenseNet121 on a total sample of 429 x-ray images and achieved 88% accuracy with an area under the curve (AUC) score of 0.97 [13]; and Rahaman et al examined 15 different pre-trained CNN models and obtained the highest classification accuracy of 89.3% using VGG-19 [14]. Asnaoui and Chawki [15] used InceptionResnetV2, DenseNet201, Resnet50, MobilenetV2, InceptionV3, VGG16, and VGG19; the highest accuracy, 92.18%, was obtained using InceptionResnetV2.

It has been noted in the literature that CT scans generate lower false-positive rates than x-rays. Multiple CNN models to classify COVID-positive patients from CT-scan images were implemented by Wu et al [16]. Using CT slices, Wang et al [17] proposed a 3D deep CNN (DeCovNet) to detect COVID-19. He et al [18] introduced a dataset comprising a few hundred images of lung CT scans to detect COVID-19 and proposed an approach named Self-Trans (i.e. self-supervised learning with transfer learning). Also, Zheng et al [17] proposed a weakly supervised DL technique for COVID-19 detection using 3D CT scans; 3D lung samples were segmented using a pre-trained U-Net, and a DL technique was then applied to them for the prediction of infected regions. The reported model accuracy was 95.9%. A novel weakly supervised DL framework was proposed by Hu et al [19], which was capable of detecting and localizing lesions on COVID-19 and community-acquired pneumonia CT images from image-level labels. The performance achieved was around 89% for the detection of COVID-19-positive patients, with an AUC score of 0.923. According to the review paper of Roberts et al [20], a wide range of models have used lung segmentation as a pre-processing step, and 2D models have used transfer learning with a network pre-trained on ImageNet. An accuracy of 87% was obtained by Sarker et al [21] using DenseNet-121 on chest radiographic images. Li et al [22] proposed COVNet and achieved an accuracy of 95% using chest CT-scan images, but due to privacy concerns, their dataset of 4357 chest CT images from 3322 patients is not publicly available. Yang et al [23] developed a diagnosis system (DeepPneumonia) using DL techniques to identify COVID-19 patients; their model attained an excellent AUC of 0.99. The localization of the main lesion features, especially ground-glass opacity, is an added feature of their model. Due to the scarcity of publicly available datasets, Maghdid et al [24] collected images of chest x-rays and lung CT scans from different sources and trained a CNN and a modified AlexNet. To classify COVID-19 versus other types of pneumonia, Bai et al [25] carried out lung segmentation of abnormal CT slices and then used an EfficientNet B4 deep neural network architecture followed by a two-layer fully connected neural network to pool the slices together. U-Net and ResNet were used for the segmentation and fine-grained localization of infected lungs by Greenspan et al [4] to detect COVID-19-positive patients; an unsupervised clustering technique was used to segment the lung images. More than 110 infected patients were tested and an accuracy of 94.80% was obtained using this method. Shah et al [26] proposed a method based on a CNN that had an accuracy of 82.1%, as well as five transfer-learning models, of which VGG-19 outperformed the others, with an accuracy of 94.52%.

Therefore, to overcome the issues with the existing models, a framework based on transfer learning is proposed in this paper to classify COVID-19 infected patients.

3. Dataset

The proposed work deals with COVID-19 detection using lung CT scans. Angelov et al prepared a lung CT-scan image dataset [27] with two classes, COVID and non-COVID, by collecting real patient CT scans from hospitals in São Paulo, Brazil. It is publicly available at www.kaggle.com/plameneduardo/sarscov2-ctscan-dataset. A total of 2481 CT-scan images are used for training and testing purposes. The SARS-CoV-2 CT-scan dataset contains 1252 CT scans of COVID-19-positive patients and 1229 CT scans of COVID-19-negative patients. Figure 1 shows a sample of the lung CT-scan images in the dataset. The dataset was split using a ratio of 8:2 for training and testing purposes.
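As a rough illustration of this split (not code from the paper), the sketch below assumes the Kaggle archive has been unpacked into class subfolders named 'COVID' and 'non-COVID'; the folder layout and file extension are assumptions.

```python
# Minimal sketch of the 8:2 train/test split; the directory layout and
# file extension are assumptions, not taken from the paper.
import glob
from sklearn.model_selection import train_test_split

covid = glob.glob("sarscov2-ctscan-dataset/COVID/*.png")
non_covid = glob.glob("sarscov2-ctscan-dataset/non-COVID/*.png")

paths = covid + non_covid
labels = [1] * len(covid) + [0] * len(non_covid)  # 1 = COVID-positive

# 8:2 split, stratified so both classes keep their ratio in each subset
train_paths, test_paths, train_labels, test_labels = train_test_split(
    paths, labels, test_size=0.2, stratify=labels, random_state=42)
```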


Figure 1. SARS-CoV-2 CT-scan dataset sample images: (a) a COVID-19-positive patient's chest CT scans; (b) a COVID-19-negative patient's chest CT scans.


4. Methodology

A simple two-dimensional DL framework based on CT-scan images using transfer learning, called KarNet, is developed. Transfer learning is popular for building a model in a short time span with a minimal dataset. In the proposed work, data augmentation removed the need for a large lung CT-scan dataset. The study was carried out in three phases to achieve the highest possible accuracy: data pre-processing, feature extraction, and binary classification. Four pre-trained models were used as the backbone, namely DenseNet201 [28], MobileNet [29], ResNet50V2 [30], and VGG16 [31]; they were integrated with additional layers to evaluate each model's performance separately on the augmented as well as the unaugmented dataset. Previously, as described in the literature, Jaiswal et al [32] achieved accuracies of 96.25% and 95.45% on a test set of SARS-CoV-2 data [27] using DenseNet201 and VGG16, respectively. KarNet outperformed these scores using the same dataset and even gave promising results with image augmentation. The architecture of the proposed methodology for detecting COVID-19 using lung CT-scan images is shown in figure 2.


Figure 2. Architecture of the proposed model with transfer learning models for feature extraction and a CNN for classification.


4.1. Input layer

In this study, the CT-scan images had to be compatible with the pre-trained transfer-learning models so that features could be extracted from them and they could be classified properly. In a simple pre-processing step, input images (224 × 224 × 3 pixels) were normalized to the interval [0, 1]. The image dataset, with two classes, was then divided into training and testing categories. The training images were fed into one of the pre-trained model layers to extract their features. The pre-trained models were able to classify lung CT scans based upon the class labels assigned to the training dataset, i.e. COVID versus non-COVID.
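A minimal sketch of this pre-processing step follows; the file format and reader are assumptions, since the paper does not give a data-loading recipe.

```python
# Minimal sketch of the input pre-processing: resize to 224 x 224 x 3
# and normalize pixel values to [0, 1]. PNG decoding is an assumption.
import tensorflow as tf

def preprocess(path):
    raw = tf.io.read_file(path)
    img = tf.image.decode_png(raw, channels=3)  # force 3 color channels
    img = tf.image.resize(img, (224, 224))      # 224 x 224 pixels
    return img / 255.0                          # normalize to [0, 1]
```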

4.2. Pre-trained model layers

A pre-trained model is generally trained on a large benchmark dataset. DenseNet201, VGG16, ResNet50V2 and MobileNet were investigated using the proposed framework and dataset, to evaluate each model's performance according to metric values. Each pre-trained model can be divided into two parts, namely a convolutional base and a classifier. For feature generation from the image, a stack of convolution layers is paired with pooling layers; the classifier is responsible for categorizing the image based on the extracted features. In the pre-trained model layers of KarNet, the original classifier was removed and the convolutional base was re-trained; additional layers were then added to act as a new classifier for COVID-19-positive or COVID-19-negative detection.
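As a rough illustration (not code from the paper), the convolutional base of each backbone can be obtained in Keras by dropping the original ImageNet classifier and keeping the base trainable, since the base is re-trained here.

```python
# Illustrative only: each backbone's convolutional base is obtained by
# dropping the original 1000-class ImageNet classifier (include_top=False).
from tensorflow.keras.applications import (DenseNet201, MobileNet,
                                           ResNet50V2, VGG16)

backbones = {"DenseNet201": DenseNet201, "VGG16": VGG16,
             "ResNet50V2": ResNet50V2, "MobileNet": MobileNet}

bases = {}
for name, net in backbones.items():
    base = net(weights="imagenet",      # pre-defined weights and biases
               include_top=False,       # remove the original classifier
               input_shape=(224, 224, 3))
    base.trainable = True               # the convolutional base is re-trained
    bases[name] = base
```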

4.3. Additional layers

The activations from the transfer-learning pre-trained model layers were fed into the additional layers, which acted as the classifier for COVID-19-positive and COVID-19-negative patients. In the additional layers, an average pooling layer was applied first, which summarizes each feature map by its average value. The 2D average pooling block reduced the size of the data, the number of parameters, and the amount of computation needed; pooling also helped to control overfitting. The activations were then flattened and two fully connected layers were added, the first layer with 128 nodes and the second with 64 nodes. To avoid overfitting, a dropout layer was added between these dense layers. Subsequently, from the second dense layer, the activations were fed into a softmax layer with two nodes, which provided the probability of each of the COVID-19-positive and COVID-19-negative classes. The softmax function is represented by $\sigma \left( {{x_j}} \right)$ in equation (1):

$\sigma(x_j) = \frac{e^{x_j}}{\sum_{k=1}^{K} e^{x_k}}, \quad j = 1, \ldots, K$    (1)

The softmax maps its inputs to values in the interval [0, 1], and its outputs always sum to 1; here $K = 2$ is the number of output nodes. As a result, the neural network model assigns an instance to the class with the maximum output. Since the softmax activation function is conventionally paired with a categorical cross-entropy loss function, this loss was used, together with an Adam optimizer. The categorical cross-entropy calculates the loss by computing the sum represented in equation (2), where $\widehat {{y_i}}$ is the $i{\text{th}}$ scalar value in the model's output, ${y_i}$ is the corresponding target value, and the number of scalar values in the model output is the output size:

$\text{CE} = -\sum_{i=1}^{\text{output size}} y_i \cdot \log \widehat{y_i}$    (2)
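To make this concrete, the following is a minimal Keras sketch of the additional layers and the loss set-up described above. The 2 × 2 pooling window, ReLU activations, and 0.5 dropout rate are assumptions (the paper does not state them); the two-node softmax output and the categorical cross-entropy/Adam pairing correspond to equations (1) and (2).

```python
# Sketch of KarNet's additional layers on a DenseNet201 base. The 2x2
# pooling window, ReLU activations, and 0.5 dropout rate are assumed.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

base = DenseNet201(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))

model = models.Sequential([
    base,
    layers.AveragePooling2D(pool_size=(2, 2)),  # 2D average pooling (assumed size)
    layers.Flatten(),
    layers.Dense(128, activation="relu"),       # first dense layer, 128 nodes
    layers.Dropout(0.5),                        # dropout between dense layers (assumed rate)
    layers.Dense(64, activation="relu"),        # second dense layer, 64 nodes
    layers.Dense(2, activation="softmax"),      # equation (1): two-class probabilities
])

# Equation (2): categorical cross-entropy paired with the Adam optimizer
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```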

4.4. Data augmentation

To obtain a diverse dataset and prevent overfitting, data augmentation was performed for each training group of CT-scan images. In the proposed methodology, data augmentation took place in three ways, as follows: (a) for image rotation, an angle of rotation between $-20^\circ$ and $20^\circ$ was randomly selected; (b) an image shift in the range of 0.2 was applied to the width and the height; and (c) horizontal flipping was enabled. Each pre-trained model, including the additional layers, was trained on the augmented as well as the unaugmented SARS-CoV-2 lung CT-scan dataset [27], and the individual performances were evaluated quantitatively.
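A minimal sketch of these three augmentations follows, assuming Keras' ImageDataGenerator (the paper does not name its augmentation tooling); the rescaling matches the [0, 1] normalization of the input layer.

```python
# Sketch of the three augmentations: rotation, width/height shift, and
# horizontal flipping, matching the parameters stated in the text.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,       # random rotation in [-20, +20] degrees
    width_shift_range=0.2,   # horizontal shift up to 20% of image width
    height_shift_range=0.2,  # vertical shift up to 20% of image height
    horizontal_flip=True)    # random horizontal flipping
```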

4.5. Transfer learning model architecture

In comparison with traditional machine-learning methods, CNN-based transfer-learning models have the following advantages: (a) less pre-processing of the dataset is required; (b) the process of learning is faster; (c) optimization of numerous parameters can adjust the time complexity; and (d) such models work remarkably well with limited datasets, and are thus suitable for use as medical image classifiers. In the proposed work, four pre-trained transfer-learning-based models are used for the binary image classification task, namely DenseNet201, VGG16, ResNet50V2 and MobileNet, which were previously trained on the ImageNet dataset to classify 1000 object categories. These pre-trained models are re-trained for the binary classification of a lung CT scan into two classes.

All four models require an input size of (224 × 224 × 3). MobileNet has a lightweight architecture comprising only 30 layers, which makes it an effective CNN for mobile vision applications. Depthwise separable convolution is used by MobileNet, which means that it performs convolution on each color channel rather than combining all three channels and flattening them; training MobileNet is thus less time-consuming. Table 1 reports the time taken to train each model with and without augmented images. As previously described in the literature, Ebru et al [33] used MobileNet for the detection of COVID-19 using chest x-ray images, obtaining an accuracy of 87%. Our framework yields better results on CT-scan images using MobileNet, making this particular model attractive for mobile applications. However, ResNet50V2 took the least time to train on the unaugmented database and also performed marginally better than MobileNet on the augmented dataset. The residual network (ResNet) was the first neural network able to train very deep architectures without succumbing to the 'vanishing gradient' problem. ResNet50V2 uses batch normalization before each weight layer and possesses 50 layers in total. The well-known pre-trained transfer-learning model VGG16 has been used to a significant extent in medical image classification for COVID-19 detection [18, 32]. Angelov et al [27] scored an accuracy of 94.96% on the SARS-CoV-2 CT-scan dataset. In our framework, VGG16 only worked reasonably well for models trained on unaugmented images.

Table 1. Time taken by the models during the training process to complete 500 epochs, using the augmented and unaugmented SARS-CoV-2 datasets.

Models        Using unaugmented data    Using augmented data
DenseNet201   56.83 min                 159.65 min
VGG16         40.81 min                 163.22 min
ResNet50V2    34.0 min                  161.2 min
MobileNet     48.57 min                 174.11 min

In the proposed work, DenseNet201 is the best-performing model when trained on both the unaugmented and the augmented datasets, so its architecture is explained in detail in figure 3. The fundamental building block of the ResNet architecture merges a previous layer with a later layer, and these additive merges force the network to learn residuals. DenseNet instead proposes a concatenation of the outputs from preceding layers rather than a summation. DenseNet (Dense Convolutional Network) connects each layer to every other layer in a feed-forward fashion. DenseNet201 is a CNN that is 201 layers deep. It has compelling advantages, such as (a) mitigating the vanishing-gradient problem, (b) reinforcing feature propagation, (c) stimulating feature re-usability and (d) parameter reduction. DenseNets are easily trainable due to the enriched flow of information and gradients throughout the network. Each layer has direct access to the gradients of the loss function and the original input signal, which helps in the training of deeper network architectures. Furthermore, dense connections have a regularizing effect, which alleviates over-fitting in tasks with smaller training sets.
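The contrast between the two merge styles can be sketched in Keras as follows; this is an illustration of the idea, not code from the paper.

```python
# Illustration of the two merge styles: ResNet adds the shortcut to the
# block output, whereas DenseNet concatenates it along the channel axis,
# so later layers see all preceding feature maps.
from tensorflow.keras import Input, layers

x = Input(shape=(56, 56, 64))
f_x = layers.Conv2D(64, 3, padding="same")(x)  # some transformation F(x)

residual = layers.Add()([x, f_x])       # ResNet: x + F(x), still 64 channels
dense = layers.Concatenate()([x, f_x])  # DenseNet: [x, F(x)], now 128 channels
```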


Figure 3. Architecture of DenseNet201.


4.6. Implementation details

Since the transfer-learning approach allows pre-trained models to be re-trained, it saves computational time and reduces the system requirements for model training. For developing countries, it plays a vital role in experimentation, since training a model from scratch demands high-end hardware. The current work is implemented using TensorFlow, and 12 GB of RAM is utilized. Five hundred epochs and a batch size of 32 are maintained throughout the experiment.
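A minimal training call consistent with these settings is shown below, reusing the compiled model object from the sketch in section 4.3; the data variable names and the validation fraction are illustrative assumptions.

```python
# Minimal training sketch: 500 epochs, batch size 32, as stated above.
# `model` is the compiled network from the section 4.3 sketch;
# train_images/train_labels and the validation fraction are assumed.
history = model.fit(train_images, train_labels,
                    validation_split=0.1,  # assumed validation fraction
                    epochs=500,
                    batch_size=32)
```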

5. Quantitative analysis

For each of the four different pre-trained models (DenseNet201, VGG16, ResNet50V2, and MobileNet) in KarNet, the classification performance is evaluated using various confusion-matrix-based performance metrics for models trained with the augmented and unaugmented image datasets. These performance metrics are the F1 score, accuracy, precision, recall, and specificity on a testing set of CT-scan images. The main objective of this paper is to determine whether a person has been infected with COVID-19 or not. The outcome can be positive (indicating that the patient has been infected with the virus) or negative (indicating that the patient has not been infected with the virus). The test result for a patient may or may not fall into the same category as the patient's actual class. In this setting, a true positive (TP) means that a COVID-19-positive patient is accurately recognized as COVID-19 positive, and a true negative (TN) means that a COVID-19-negative patient is recognized as COVID-19 negative. A false positive (FP) represents a COVID-19-negative patient who is incorrectly recognized as COVID-19 positive. Finally, a false negative (FN) denotes a COVID-19-positive patient who is incorrectly recognized as COVID-19 negative. In order to assess the performance of the model, the following performance measures are used:

$A_{\text{covid19}} = \frac{\text{TP} + \text{TN}}{\text{TP} + \text{TN} + \text{FP} + \text{FN}}$    (3)

The accuracy $({A_{{\text{covid19}}}})$ measures the proportion of instances from all classes that are correctly recognized. In this paper, the COVID-19-positive and COVID-19-negative classes are equally important; hence ${A_{{\text{covid19}}}}$ in equation (3) is used, calculated as the number of correct classifications divided by the total number of items. However, ${A_{{\text{covid19}}}}$ does not reflect how the data are distributed between the classes; thus, the F-measure in equation (4) is used to compensate for this limitation of accuracy:

$F1_{\text{covid19}} = \frac{2 \times P_{\text{covid19}} \times R_{\text{covid19}}}{P_{\text{covid19}} + R_{\text{covid19}}}$    (4)

The precision $\left( {{P_{{\text{covid19}}}}} \right)$ evaluates the exactness of the classifier: a large FP count corresponds to a low ${P_{{\text{covid19}}}}$ value. The recall $\left( {{R_{{\text{covid19}}}}} \right)$, also known as sensitivity, defines the completeness of the classifier: a large FN count corresponds to a low ${R_{{\text{covid19}}}}$. Their mathematical formulations are given below in equations (5) and (6), respectively:

$P_{\text{covid19}} = \frac{\text{TP}}{\text{TP} + \text{FP}}$    (5)

$R_{\text{covid19}} = \frac{\text{TP}}{\text{TP} + \text{FN}}$    (6)

Specificity $({S_{{\text{covid19}}}})$ defines the proportion of actual negatives that are accurately classified as such (e.g. the percentage of people not affected by COVID-19 who are correctly classified as COVID-19-negative patients). ${S_{{\text{covid19}}}}$ is formulated in equation (7):

$S_{\text{covid19}} = \frac{\text{TN}}{\text{TN} + \text{FP}}$    (7)

The receiver operating characteristic (ROC) curve is also plotted, and the AUC is calculated for each of the four models. The ROC curve is commonly used in binary classification problems; this graphical plot illustrates the diagnostic ability of a classifier using the TP and FP rates at various decision thresholds. In medical image analysis, and especially in COVID-19 detection, it is very important to achieve minimal FP and FN values, since this demonstrates superior classification performance. Misclassification may lead to a false diagnosis for patients.
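For illustration, the metrics in equations (3)–(7) and the AUC can be computed from a confusion matrix as in the following sketch; the labels and scores shown are illustrative, not the paper's results.

```python
# Sketch of equations (3)-(7) plus the AUC via scikit-learn; the labels
# and scores below are illustrative, not the paper's results.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 0, 0, 1, 0])               # 1 = COVID-positive
y_score = np.array([0.9, 0.8, 0.3, 0.1, 0.4, 0.6])  # softmax P(positive)
y_pred = (y_score >= 0.5).astype(int)               # 0.5 decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)          # equation (3)
precision = tp / (tp + fp)                          # equation (5)
recall = tp / (tp + fn)                             # equation (6), sensitivity
specificity = tn / (tn + fp)                        # equation (7)
f1 = 2 * precision * recall / (precision + recall)  # equation (4)
auc = roc_auc_score(y_true, y_score)                # area under the ROC curve
```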

6. Results and discussion

For exhaustive testing, each pre-trained model with the additional CNN layers is trained on original images (i.e. the unaugmented dataset) and distorted images (i.e. the augmented dataset). This helps to investigate the capability of the KarNet architecture under both scenarios. The training performances on the unaugmented and augmented datasets are shown in table 2. Table 3 summarizes the test analysis of the KarNet models, built on the four transfer-learning pre-trained models, for the classification of COVID-19-positive and COVID-19-negative patients from lung CT scans, before and after augmentation of the training images. The four models, namely DenseNet201, VGG16, ResNet50V2, and MobileNet, were analyzed based on metric values for accuracy, precision, recall (sensitivity), specificity, and F1 score. From the table, it is noticeable that all four models showed excellent classification performance on the testing set. ResNet50V2 achieved an accuracy of 96% for the models trained using both augmented and unaugmented images. During the test-data analysis, VGG16 achieved 97% accuracy for the model trained on the original (i.e. unaugmented) dataset but scored 94% for the augmented dataset. The performance of MobileNet differed only marginally between the two trained models: the model trained with unaugmented images achieved an accuracy of 96% and the other model reached 95%.

Table 2. Training and validation accuracy of the models using unaugmented and augmented datasets.

Models        Using unaugmented data      Using augmented data
              Training    Validation      Training    Validation
DenseNet201   99%         97%             98%         97%
VGG16         99%         96%             94%         94%
ResNet50V2    99%         96%             96%         95%
MobileNet     99%         97%             97%         95%

Table 3. Testing analysis using models trained on unaugmented and augmented SARS-CoV-2 lung CT-scan images.

Model trained on the unaugmented lung CT-scan image dataset

Models        Accuracy    Precision    Recall    Specificity    F1-score
DenseNet201   97%         0.95         0.98      0.95           0.97
VGG16         97%         0.96         0.98      0.96           0.97
ResNet50V2    96%         0.97         0.94      0.97           0.96
MobileNet     96%         0.95         0.97      0.94           0.96

Model trained on the augmented lung CT-scan image dataset

Models        Accuracy    Precision    Recall    Specificity    F1-score
DenseNet201   97%         0.95         0.98      0.95           0.97
VGG16         94%         0.95         0.94      0.95           0.94
ResNet50V2    96%         0.95         0.97      0.95           0.96
MobileNet     95%         0.94         0.96      0.93           0.95

DenseNet201 performed exceptionally well in both scenarios, scoring 97% accuracy even after extreme manipulation of the augmented lung CT-scan images during training. Figure 4 shows the accuracy and loss analysis of DenseNet201 on the training and validation datasets over the number of epochs. Ninety-eight percent of COVID-19 patients are correctly recognized as COVID-19 patients by both trained DenseNet201 models. The impacts of TP and TN are demonstrated with the help of the confusion matrices in figure 5. A graphical illustration of the ROC curves with the AUC values for models trained on the original and the augmented datasets is presented in figure 6. The AUC score of DenseNet201 is 1.00 and the rest of the models scored 0.99 for models trained on unaugmented CT scans. For models trained with distorted images (i.e. the augmented dataset), DenseNet201 and MobileNet gained AUC scores of 0.98, while the others scored 0.97. Hence, DenseNet201 surpasses the other transfer-learning models used in this work. Thus, with a relatively small dataset, KarNet overcomes many drawbacks and demonstrates excellent diagnostic ability as a binary classifier. Table 4 compares the performance of transfer-learning models from the literature with ours. In the future, a sustainable artificial intelligence system could be established that continues to train the proposed framework using widely collected lung CT images.


Figure 4. Training and validation analysis over 500 epochs of the re-trained DenseNet201 model using the unaugmented dataset: (a) training and validation accuracy; (b) training and validation loss.


Figure 5. Confusion matrix analysis of the proposed re-trained DenseNet201 model trained on: (a) the unaugmented dataset; (b) the augmented dataset.


Figure 6. AUC of all transfer-learning models trained on: (a) the unaugmented dataset; (b) the augmented dataset.


Table 4. Comparison with other methods. All the listed references used transfer-learning-based methodologies to classify CT-scan images as COVID-19 positive or negative, with accuracies that vary according to the specific model. The highest accuracy, achieved by our DenseNet201 model, appears in the final group of rows.

Reference              Total CT-scan samples    Pre-trained model    Accuracy
Shah et al [26]        738                      VGG-19               94.52%
                                                DenseNet169          93.15%
                                                VGG-16               89%
                                                ResNet50             60%
                                                InceptionV3          53.4%
Bai HX et al [25]      118 401                  EfficientNet B4      96%
Maghdid et al [24]     339                      AlexNet              82%
Angelov et al [27]     2481                     VGG-16               94.96%
Jaiswal et al [32]     2481                     DenseNet201          96%
                                                VGG-16               95%
                                                ResNet152V2          94.91%
                                                Inception ResNet     90.90%
Ours                   2481                     DenseNet201          97%
                                                VGG-16               94%
                                                ResNet50V2           96%
                                                MobileNet            95%

7. Conclusion

KarNet, a simple two-dimensional DL framework, provides excellent diagnostic performance in the detection of COVID-19 patients using lung CT-scan images. The KarNet model is based on transfer-learning pre-trained models. In this experiment, each model was trained on unaugmented (i.e. original) and augmented (i.e. manipulated) datasets to investigate the framework's capability to a greater extent, and DenseNet201 was found to perform best, reaching an accuracy of 97% on the testing and validation datasets for both models, i.e. those trained using the augmented and the unaugmented CT-scan datasets. VGG16, ResNet50V2 and MobileNet also achieved promising accuracy on the test images. The proposed architecture significantly improves the diagnostic ability of the model and achieves an excellent AUC score. Therefore, KarNet has been shown to outperform the current state of the art and to classify COVID-19 patients precisely.

Since most hospitals are equipped with CT-scanners, the proposed model can be implemented to improve COVID-19 testing methods. Hence, this model can serve as an automatic alternative testing process, saving time and the lives of infected patients before it is too late.

Data availability statement

All data that support the findings of this study are included within the article (and any supplementary files).
