
Brain Tumor Classification Based on Enhanced CNN Model


Naveen Mukkapati, M.S. Anbarasi

Department of Computer Science & Engineering, Pondicherry Engineering College, Pondicherry 605014, India

Department of Computer Science & Engineering, RVR & JC College of Engineering, Chowdavaram, Guntur, Andhra Pradesh, India

Department of Information Technology, Pondicherry Engineering College, Pondicherry 605014, India

Corresponding Author Email: 
naveenkumar105@gmail.com
Page: 
125-130
|
DOI: 
https://doi.org/10.18280/ria.360114
Received: 
12 October 2021
|
Revised: 
2 December 2021
|
Accepted: 
8 December 2021
|
Available online: 
28 February 2022

© 2022 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Brain tumor classification is an important process that helps doctors plan treatment for patients based on the tumor stage. Various CNN-based architectures have been applied to brain tumor classification to improve classification performance. Existing methods for brain tumor segmentation suffer from overfitting and lower efficiency when handling large datasets. In this research, an enhanced CNN architecture is proposed that uses U-Net for brain tumor segmentation, RefineNet for pattern analysis, and SegNet for classifying the brain tumor. The brain tumor benchmark dataset was used to analyze the efficiency of the enhanced CNN model. U-Net provides good segmentation based on the local and contextual information of the MRI image. SegNet selects the important features for classification and also reduces the number of trainable parameters. Compared with existing methods of brain tumor classification, the enhanced CNN method achieves higher performance: it reaches an accuracy of 96.85%, whereas the existing CNN with transfer learning achieves 94.82%.

Keywords: 

brain tumor classification, brain tumor benchmark, U-Net, RefineNet, SegNet

1. Introduction

Brain cancer classification is a significant task in selecting treatment for patients, and it requires the physician's knowledge and experience. An automatic brain cancer classification system acts as a decision model that helps radiologists identify tumors. The accuracy of current systems needs to be improved to support suitable treatment [1]. A brain tumor classification system assists the doctor in evaluating the prognosis, aggressiveness, and growth of the brain tumor. The types of brain tumor need to be classified to assist the doctor; common categories are glioma, benign, and malignant tumors [2]. Recently, machine learning techniques have achieved significant performance in image analysis and provide nearly the same accuracy as trained specialists in detecting brain tumors. Deep learning techniques provide significant improvements in brain tumor detection and other medical image analysis tasks [3]. Tumor diagnosis and treatment require various features, such as the size and position of the tumor in the brain Magnetic Resonance Imaging (MRI) scan [4]. MRI screening techniques are generally preferred by doctors for estimating the structure of the tumor before and after treatment [5].

Various imaging methods can be applied to identify and categorize tumors, and MRI is a commonly used non-invasive method. MRI screening does not use ionizing radiation during the scan, provides high-resolution soft-tissue images, and applies different imaging parameters to acquire different images [6, 7]. Convolutional Neural Network (CNN) based methods have been applied successfully and have achieved strong performance in brain tumor classification. The advantages of CNN-based methods are that manually segmented tumor regions are not needed for classification and the classification is fully automated [8]. The most common problem with existing CNN-based brain tumor classification is lower performance on publicly available datasets [9, 10]. In this research, an enhanced CNN model is introduced to increase the efficiency of brain tumor detection. The enhanced CNN model is based on three techniques: U-Net, RefineNet, and SegNet. U-Net is used for segmenting the brain tumor, RefineNet for pattern analysis, and SegNet for classification. The analysis shows that the enhanced CNN model has higher performance than existing methods.

2. Literature Survey

Brain tumors are among the deadliest types of cancer, with a poor survival rate. Early diagnosis and categorization of brain tumors aid in the efficient treatment of the malignancy. Recent research on brain tumor classification is reviewed in this section.

Sajjad et al. [11] developed an extensive data augmentation approach and a CNN-based model for brain tumor classification. The tumor region is segmented from the image using a CNN, and the data augmentation approach is used to train the CNN model. The augmented data are used to fine-tune a pre-trained CNN model for brain tumor classification. The proposed CNN model was tested on the brain tumor dataset and showed higher classification performance. However, the overfitting problem of the CNN model needs to be reduced to increase the efficiency of brain tumor classification.

Anaraki et al. [12] presented a CNN model combined with a Genetic Algorithm (GA) for brain tumor classification in MRI images. The CNN architecture is evolved using the GA, and bagging is used as an ensemble method to decrease the variance of the prediction error. The results show that the CNN with GA has higher efficiency in three-class classification. The designed CNN with GA was evaluated using a brain tumor dataset, and data augmentation was utilized to train the CNN model. However, the effectiveness of the CNN model in brain tumor classification still needs to be improved.

Swati et al. [13] suggested block-wise fine-tuning based on transfer learning for a pre-trained CNN model. The technique was tested on the T1-weighted contrast-enhanced MRI benchmark dataset and showed higher efficiency in brain tumor classification under five-fold cross validation, outperforming existing methods and the baseline CNN model. However, the method still needs to be evaluated on normal MRI data to analyze its performance.

Kaur and Gandhi [14] developed a number of CNN models with transfer learning for brain tumor classification. Several CNN designs were tested for brain tumor categorization, and the brain tumor benchmark dataset was utilized to assess model performance. In their analysis, the pre-trained AlexNet with transfer learning outperformed the others. However, the effectiveness of the developed CNN-based method in brain tumor categorization still needs to be increased.

Huang et al. [15] suggested a CNN-based approach for MRI brain tumor classification using a modified activation function. A randomly generated graph technique is used to optimize the model's network structure, and a network generator maps the generated graph into the neural network model. The proposed CNN-based model achieves higher performance than other existing models and has lower test loss than the ResNet, DenseNet, and MobileNet models. However, the overfitting of the model needs to be reduced to increase brain tumor classification performance.


3. Proposed Method

Automatic brain tumor classification for clinical applications aids in treatment selection. The enhanced CNN method was developed in this study to improve the performance of brain tumor classification. The enhanced CNN method is based on U-Net, RefineNet, and SegNet. The brain tumor benchmark dataset was applied to analyze the efficiency of the enhanced CNN method. U-Net is used for the segmentation process based on local and contextual information, RefineNet is used for pattern analysis, and SegNet is used for brain tumor classification. The overall block diagram of the enhanced CNN method is shown in Figure 1.

Figure 1. The block diagram of enhanced CNN method

3.1 CNN based model

In this research, three Fully Convolutional Networks (FCNs) are applied for brain tumor classification from MRI images: U-Net, RefineNet, and SegNet. U-Net is used for segmentation, RefineNet for pattern analysis, and SegNet for classification. In the proposed framework, brain MR images are first applied as input to the U-Net architecture, which produces segmented images. These segmented images are the input to RefineNet, which produces the extracted image patterns, and the patterns are fed to the SegNet architecture for brain tumor classification. SegNet classifies the tumor as glioma, meningioma, or pituitary tumor. The networks are selected based on their architectural functionality and evaluated on brain tumor classification.
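To make the data flow concrete, the following is a minimal sketch of how the three stages could be chained. It is not the authors' released code: build_unet-style model builders are assumed to exist, and masking the MRI with the U-Net output before RefineNet is an assumption about how the "segmented images" are passed to the next stage.

```python
import numpy as np

def classify_brain_mri(mri_batch, unet, refinenet, segnet):
    """Sketch of the three-stage pipeline: segmentation -> patterns -> class."""
    # Stage 1: U-Net predicts a per-pixel tumor probability map for each scan.
    seg_masks = unet.predict(mri_batch)                  # (N, H, W, 1)
    # Stage 2: RefineNet extracts tumor patterns from the segmented images
    # (approximated here by masking the input with the predicted map).
    patterns = refinenet.predict(mri_batch * seg_masks)  # (N, H, W, C)
    # Stage 3: SegNet-style classifier assigns one of the three tumor types.
    probs = segnet.predict(patterns)                     # (N, 3)
    class_names = ["glioma", "meningioma", "pituitary"]
    return [class_names[i] for i in np.argmax(probs, axis=1)]
```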

3.2 U-Net method

The first architecture used in this research is U-Net [16], and it is applied for brain tumor segmentation. The U-Net architecture, shown in Figure 2, has an encoding part and a decoding part. Each encoding unit contains two convolution layers, followed by a rectified linear unit (ReLU) and a 2×2 pooling (down-sampling) operation with stride 2. The number of feature channels is doubled at each down-sampling step. In the corresponding decoding architecture, 2×2 up-convolution layers halve the number of feature channels. Each decoding unit also contains two 3×3 convolutions with ReLU and a concatenation operator that merges the cropped feature map from the corresponding encoding unit. Finally, a 1×1 convolution maps the component feature vectors to the segmentation output. The soft-max function is computed on the final feature map and combined with the cross-entropy loss. At each position, the cross-entropy penalizes the deviation of $M_{\lambda(x)}(x)$ from one, as shown in Eq. (1).

$\varepsilon=\sum_{x \in \Omega} \log \left(M_{\lambda(x)}(x)\right)$         (1)

where each pixel's true label is denoted as $\lambda: \Omega \rightarrow\{1, \ldots, K\}$ at position $x \in \Omega$, with $\Omega \subset \mathbb{Z}^{2}$. The final segmentation is generated by the network's soft-max layer as a probability map, in which the value of each pixel indicates whether or not that pixel belongs to the tumor. Context information is propagated to the corresponding resolution layers through the large number of feature channels in the network, enabling end-to-end training with a small amount of training data. The Keras library in the TensorFlow framework is used to implement this network.

Figure 2. U-Net architecture
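As a rough illustration of the encoder-decoder structure and the soft-max/cross-entropy output described above, here is a minimal two-level U-Net sketch in Keras. The depth, filter counts, and input size are illustrative assumptions rather than the paper's exact configuration, and "same" padding is used instead of the original valid convolutions with cropping.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU, as in each encoding/decoding unit.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(256, 256, 1), base_filters=64):
    inputs = layers.Input(input_shape)
    # Encoder: feature channels double at every 2x2 down-sampling step.
    c1 = conv_block(inputs, base_filters)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, base_filters * 2)
    p2 = layers.MaxPooling2D(2)(c2)
    b = conv_block(p2, base_filters * 4)          # bottleneck
    # Decoder: 2x2 up-convolutions halve the channels; skip connections
    # concatenate the encoder feature map at the same resolution.
    u2 = layers.Conv2DTranspose(base_filters * 2, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), base_filters * 2)
    u1 = layers.Conv2DTranspose(base_filters, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), base_filters)
    # 1x1 convolution maps each pixel's feature vector to background/tumor
    # scores; soft-max plus cross-entropy corresponds to Eq. (1).
    outputs = layers.Conv2D(2, 1, activation="softmax")(c4)
    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model
```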

3.3 RefineNet method

RefineNet [17] is the second network architecture used in this research; it extracts the brain tumor pattern. It takes the segmented images as input and produces the extracted patterns as output. RefineNet is a refinement network that uses a four-cascaded structure of RefineNet units. Each unit is connected to the previous RefineNet block in the cascade and also to the output of one residual network block [18]. Each RefineNet unit is made up of two Residual Convolution Units (RCUs), which combine the high-resolution feature map with the coarser outputs. The multi-path refinement architecture explicitly exploits the information available during the down-sampling process through long-range residual connections to enable high-resolution prediction. Fine-grained features from the earlier convolutions are used to directly refine the deeper layers, which capture high-level semantic features. A chained residual pooling mechanism is used in the network to capture rich background context effectively.
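The residual convolution units and the fusion/pooling steps described above can be sketched as follows in Keras. This is a simplification of the published RefineNet [17, 18], not a faithful re-implementation; it assumes the coarse path is at half the resolution of the fine path and that both inputs already have `filters` channels so the residual additions line up.

```python
from tensorflow.keras import layers

def residual_conv_unit(x, filters=256):
    # RCU: two ReLU + 3x3 convolution stages with an identity shortcut.
    shortcut = x
    y = layers.ReLU()(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.Add()([shortcut, y])

def refine_block(coarse, fine, filters=256):
    # Multi-resolution fusion: each path passes through two RCUs and a 3x3
    # projection; the coarse path is up-sampled and summed with the fine path.
    c = residual_conv_unit(residual_conv_unit(coarse, filters), filters)
    c = layers.Conv2D(filters, 3, padding="same")(c)
    c = layers.UpSampling2D(2, interpolation="bilinear")(c)
    f = residual_conv_unit(residual_conv_unit(fine, filters), filters)
    f = layers.Conv2D(filters, 3, padding="same")(f)
    fused = layers.Add()([c, f])
    # Chained residual pooling: a pooling + convolution stage whose output is
    # added back onto the fused map to capture background context.
    pooled = layers.MaxPooling2D(5, strides=1, padding="same")(fused)
    pooled = layers.Conv2D(filters, 3, padding="same")(pooled)
    return residual_conv_unit(layers.Add()([fused, pooled]), filters)
```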

3.4 SegNet method

In this method, a fully convolutional encoder-decoder network called SegNet is used to classify brain tumors [19]. The soft-max layer is redesigned for the brain tumor task. The entire architecture is a network of encoders and decoders. The encoder is made up of four blocks, each consisting of a convolutional layer, a batch normalization layer, a ReLU layer, and a pooling layer with a 2×2 kernel and stride 2. The decoder blocks mirror the encoder blocks and contain the up-sampling layers. For smooth labelling and to provide a wide context, the convolution kernel size in the network is set to 7×7. "Indices pooling" is applied in this network: several levels of sub-sampling and max-pooling can be used to provide additional translation invariance for effective classification, but the feature maps lose spatial resolution. The encoder feature maps store the boundary information before sub-sampling is performed, and only the max-pooling indices are stored in the network for memory efficiency; for each encoder feature map, the location of the maximum feature value in each pooling window is saved. The memorized max-pooling indices are used in the decoder network to up-sample the input feature maps from the corresponding encoder feature maps. The final classification is performed by a soft-max layer in the decoder network. This network is built using the Caffe deep learning framework.
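The paper's SegNet is built in Caffe, but the "indices pooling" mechanism can be illustrated in TensorFlow with tf.nn.max_pool_with_argmax and a scatter-based unpooling. This is an illustrative sketch of the idea under those assumptions, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf

def pool_with_indices(x):
    # Encoder side: 2x2 max pooling that also records where each maximum came
    # from, so only the indices (not the full feature map) need to be kept.
    pooled, indices = tf.nn.max_pool_with_argmax(
        x, ksize=2, strides=2, padding="SAME", include_batch_in_index=True)
    return pooled, indices

def unpool_with_indices(pooled, indices, output_shape):
    # Decoder side: scatter the pooled values back to their remembered
    # positions; later convolutions densify the resulting sparse map.
    flat_size = int(np.prod(output_shape))
    flat = tf.scatter_nd(tf.reshape(indices, (-1, 1)),
                         tf.reshape(pooled, (-1,)),
                         shape=[flat_size])
    return tf.reshape(flat, output_shape)

# Example round trip on a random feature map.
x = tf.random.normal((1, 8, 8, 16))
p, idx = pool_with_indices(x)
up = unpool_with_indices(p, idx, x.shape)   # same shape as x, zeros elsewhere
```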

4. Experimental Design

Automatic brain tumor classification is an important process in selecting treatment for patients. Various CNN architectures have been applied for brain tumor classification but show lower efficiency. An enhanced CNN model is proposed in this paper to improve brain tumor classification accuracy. This section describes the dataset, metrics, parameter settings, and system requirements used to analyze the performance of the enhanced CNN model.

Dataset Used: The brain tumor benchmark dataset [20] is made up of 3064 MRI images from 233 patients. Each image is 512×512 pixels, with a 1 mm gap between slices and a 6 mm slice thickness. The dataset contains three types of images: meningioma, glioma, and pituitary tumor. The meningioma subset contains 708 images, the glioma subset contains 1426 images, and the pituitary tumor subset contains 930 images. Figure 3 depicts sample images of meningioma, glioma, and pituitary tumor.

Figure 3. The brain tumor benchmark dataset samples (a) Meningioma, (b) Glioma and (c) Pituitary tumor
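For reference, the public figshare release of this benchmark [20] stores each scan as a MATLAB v7.3 (.mat/HDF5) file containing a cjdata struct. The loader below is a sketch under that assumption; the field names and the label coding (1 = meningioma, 2 = glioma, 3 = pituitary) follow the dataset's documentation and should be verified against the downloaded copy.

```python
import h5py
import numpy as np
from pathlib import Path

def load_brain_tumor_dataset(root):
    """Load images and labels from a folder of cjdata .mat files."""
    images, labels = [], []
    for mat_path in sorted(Path(root).glob("*.mat")):
        with h5py.File(mat_path, "r") as f:
            images.append(np.array(f["cjdata/image"], dtype=np.float32))
            labels.append(int(np.array(f["cjdata/label"]).squeeze()))
    return images, np.array(labels)   # 1 meningioma, 2 glioma, 3 pituitary
```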

Metrics: The performance of the proposed enhanced CNN model was evaluated using accuracy, sensitivity, and specificity. The formulas for accuracy, sensitivity, and specificity are provided below, where True Positive is denoted as TP, True Negative as TN, False Positive as FP, and False Negative as FN.

Accuracy $=\frac{T P+T N}{T P+T N+F P+F N}$

Sensitivity $=\frac{T P}{T P+F N}$

Specificity $=\frac{T N}{T N+F P}$
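These three measures follow directly from the confusion-matrix counts; a small helper makes the computation explicit (the counts in the example call are illustrative, not taken from the paper).

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate (recall)
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

# Illustrative counts only.
print(classification_metrics(tp=290, tn=2700, fp=80, fn=15))
```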

Parameter Settings: The learning rate is set to 0.08 for U-Net, 0.0001 for RefineNet, and 0.003 for SegNet. The momentum is set to 0.9 for U-Net and 0.01 for SegNet. The weight decay is set to 0.0005 for U-Net, 0.1 for RefineNet, and 0.000001 for SegNet.
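Collected in one place, the reported hyper-parameters could be wired into an optimizer as follows. This is a hedged sketch rather than the authors' training setup: the RefineNet momentum is not reported and is left at 0, and how weight decay is applied (optimizer argument versus L2 kernel regularizers) depends on the framework version, so plain SGD with momentum is used here.

```python
import tensorflow as tf

# Per-network hyper-parameters as reported in the Parameter Settings paragraph.
HPARAMS = {
    "unet":      {"lr": 0.08,   "momentum": 0.9,  "weight_decay": 0.0005},
    "refinenet": {"lr": 0.0001, "momentum": 0.0,  "weight_decay": 0.1},   # momentum not reported
    "segnet":    {"lr": 0.003,  "momentum": 0.01, "weight_decay": 1e-6},
}

def make_optimizer(name):
    # Weight decay would be added via kernel regularizers or the optimizer's
    # weight_decay argument where the installed Keras version supports it.
    hp = HPARAMS[name]
    return tf.keras.optimizers.SGD(learning_rate=hp["lr"], momentum=hp["momentum"])
```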

System Requirement: The proposed enhanced CNN model is implemented in MATLAB 2020a on a system with an Intel i7 processor and 16 GB of RAM.

5. Experimental Results

Brain tumor classification is an important step in determining the type of tumor and selecting the best treatment. Several CNN-based architectures have been proposed for brain tumor classification, but they suffer from overfitting and lower efficiency. The enhanced CNN method was proposed in this work to improve the efficiency of brain tumor classification. The brain tumor benchmark dataset was used to assess the performance of the enhanced CNN method.

The accuracy of the enhanced CNN model is measured and compared with existing methods, as shown in Figure 4. The analysis shows that the enhanced CNN model has higher efficiency than the existing methods. The U-Net architecture in the enhanced CNN provides good segmentation based on local and contextual information. The SegNet architecture has the advantage of retaining the important features in the segmented images while reducing the number of trainable parameters in the decoders. The proposed enhanced CNN method has an accuracy of 96.85%, while the existing CNN with GA has 96.13%. The overfitting problem in the CNN-based model [11] and the CNN with transfer learning [13] affects their brain tumor classification performance.

Figure 4. Accuracy of enhanced CNN method

The sensitivity of the enhanced CNN model is measured on the brain tumor benchmark dataset and compared with existing methods, as shown in Figure 5. The analysis shows that the enhanced CNN model has higher sensitivity than the existing methods. The enhanced CNN model uses U-Net for segmentation based on local and contextual information, RefineNet to analyze the pattern of the brain tumor, and SegNet for classification. The SegNet method has the advantage of analyzing the important features in the segmented image while reducing the number of trainable parameters. The proposed enhanced CNN model has a sensitivity of 95.12%, while the existing CNN with transfer learning has 94.25%.

Figure 5. The sensitivity of enhanced CNN model

Table 1. Comparison analysis of enhanced CNN model

Methods                        Accuracy (%)    Sensitivity (%)    Specificity (%)
CNN based model [11]           94.58           88.41              96.12
CNN with GA [12]               96.13           94.2               97.1
CNN Transfer Learning [13]     94.82           94.25              94.71
Enhanced CNN Model             96.85           95.12              97.14

Figure 6. The specificity of enhanced CNN model

As shown in Figure 6, the specificity of the enhanced CNN model is measured on the brain tumor benchmark dataset and compared with existing methods. The enhanced CNN model has higher specificity than the existing methods. For brain tumor segmentation, the U-Net architecture in the enhanced CNN model employs local and contextual information. The SegNet architecture has the advantage of analyzing the key features in the segmented images while reducing the number of trainable parameters. The CNN with GA has the second-highest specificity in brain tumor classification. The proposed enhanced CNN model has a specificity of 97.14%, while the existing CNN with GA has 97.1%.

The accuracy of the enhanced CNN model is measured for the 3 classes in the brain tumor benchmark dataset, as shown in Figure 7. The proposed enhanced CNN model has higher accuracy in all 3 classes than the existing methods. The enhanced CNN model has an accuracy of 96.6% in the Meningioma class and 96.7% in the Glioma class. The CNN-based model has 96.14% accuracy in the Meningioma class and 94.05% in the Glioma class.

Figure 7. The accuracy of enhanced CNN model

The accuracy, sensitivity, and specificity of the proposed enhanced CNN model are measured and compared with existing methods, as shown in Table 1. The enhanced CNN model exceeds the existing approaches in terms of accuracy, sensitivity, and specificity. The enhanced CNN model has an accuracy of 96.85%, while the existing CNN with transfer learning has 94.82%. The enhanced CNN model has the advantage of segmenting and analyzing the important features for brain tumor classification using local and contextual information.

Table 2. Accuracy of enhanced CNN model for various classes

Methods                  Class         Accuracy (%)
CNN based model [11]     Glioma        94.05
                         Meningioma    96.14
                         Pituitary     93.21
CNN with GA [12]         Glioma        96.5
                         Meningioma    94.5
                         Pituitary     97.4
Enhanced CNN Model       Glioma        96.7
                         Meningioma    96.26
                         Pituitary     97.6

The accuracy of the proposed enhanced CNN model is measured for the 3 classes in the dataset and compared with existing methods, as shown in Table 2. The enhanced CNN model has higher accuracy than the existing methods. The enhanced CNN model has the advantage of using local and contextual information for segmentation and of selecting the important features for brain tumor classification. The enhanced CNN model has an accuracy of 97.6% in the Pituitary class, while the existing CNN-based model has 93.21%.

6. Conclusions

Automatic brain tumor classification is an important process that helps the doctor select the treatment. Various CNN-based models for brain tumor classification have been developed, but existing techniques suffer from the overfitting problem. The enhanced CNN model based on U-Net, RefineNet, and SegNet has been proposed in this study. The brain tumor benchmark dataset was used to analyze the efficiency of the enhanced CNN model. The U-Net component of the enhanced CNN provides good segmentation based on local and contextual information. The enhanced CNN method has higher performance than existing methods: it has an accuracy of 96.85%, while the existing CNN-based model has 94.58%.

References

[1] Gumaei, A., Hassan, M.M., Hassan, M.R., Alelaiwi, A., Fortino, G. (2019). A hybrid feature extraction method with regularized extreme learning machine for brain tumor classification. IEEE Access, 7: 36266-36273. https://doi.org/10.1109/ACCESS.2019.2904145

[2] Amin, J., Sharif, M., Gul, N., Yasmin, M., Shad, S.A. (2020). Brain tumor classification based on DWT fusion of MRI sequences using convolutional neural network. Pattern Recognition Letters, 129: 115-122. https://doi.org/10.1016/j.patrec.2019.11.016

[3] Ghassemi, N., Shoeibi, A., Rouhani, M. (2020). Deep neural network with generative adversarial networks pre-training for brain tumor classification based on MR images. Biomedical Signal Processing and Control, 57: 101678. http://dx.doi.org/10.1016/j.bspc.2019.101678

[4] Yin, B., Wang, C., Abza, F. (2020). New brain tumor classification method based on an improved version of whale optimization algorithm. Biomedical Signal Processing and Control, 56: 101728. http://dx.doi.org/10.1016/j.bspc.2019.101728

[5] Tandel, G.S., Balestrieri, A., Jujaray, T., Khanna, N.N., Saba, L., Suri, J.S. (2020). Multiclass magnetic resonance imaging brain tumor classification using artificial intelligence paradigm. Computers in Biology and Medicine, 122: 103804. https://doi.org/10.1016/j.compbiomed.2020.103804

[6] Sultan, H.H., Salem, N.M., Al-Atabany, W. (2019). Multi-classification of brain tumor images using deep neural network. IEEE Access, 7: 69215-69225. https://doi.org/10.1109/ACCESS.2019.2919122

[7] Iqbal, S., Khan, M.U.G., Saba, T., Rehman, A. (2018). Computer-assisted brain tumor type discrimination using magnetic resonance imaging features. Biomedical Engineering Letters, 8(1): 5-28. https://dx.doi.org/10.1007%2Fs13534-017-0050-3

[8] Deepak, S., Ameer, P.M. (2019). Brain tumor classification using deep CNN features via transfer learning. Computers in Biology and Medicine, 111: 103345. https://doi.org/10.1016/j.compbiomed.2019.103345

[9] Ismael, S.A.A., Mohammed, A., Hefny, H. (2020). An enhanced deep learning approach for brain cancer MRI images classification using residual networks. Artificial Intelligence in Medicine, 102: 101779. https://doi.org/10.1016/j.artmed.2019.101779

[10] Sharif, M.I., Li, J.P., Khan, M.A., Saleem, M.A. (2020). Active deep neural network features selection for segmentation and recognition of brain tumors using MRI images. Pattern Recognition Letters, 129: 181-189. https://doi.org/10.1016/j.patrec.2019.11.019

[11] Sajjad, M., Khan, S., Muhammad, K., Wu, W., Ullah, A., Baik, S.W. (2019). Multi-grade brain tumor classification using deep CNN with extensive data augmentation. Journal of Computational Science, 30: 174-182. https://doi.org/10.1016/j.jocs.2018.12.003

[12] Anaraki, A.K., Ayati, M., Kazemi, F. (2019). Magnetic resonance imaging-based brain tumor grades classification and grading via convolutional neural networks and genetic algorithms. Biocybernetics and Biomedical Engineering, 39(1): 63-74. http://dx.doi.org/10.1016/j.bbe.2018.10.004

[13] Swati, Z.N.K., Zhao, Q., Kabir, M., Ali, F., Ali, Z., Ahmed, S., Lu, J. (2019). Brain tumor classification for MR images using transfer learning and fine-tuning. Computerized Medical Imaging and Graphics, 75: 34-46. https://doi.org/10.1016/j.compmedimag.2019.05.001

[14] Kaur, T., Gandhi, T.K. (2020). Deep convolutional neural networks with transfer learning for automated brain image classification. Machine Vision and Applications, 31: 1-16. https://doi.org/10.18280/ts.370407

[15] Huang, Z., Du, X., Chen, L., Li, Y., Liu, M., Chou, Y., Jin, L. (2020). Convolutional neural network based on complex networks for brain tumor image classification with a modified activation function. IEEE Access, 8: 89281-89290. http://dx.doi.org/10.1109/ACCESS.2020.2993618

[16] Ronneberger, O., Fischer, P., Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241. https://doi.org/10.1007/978-3-319-24574-4_28

[17] Lin, G., Milan, A., Shen, C., Reid, I. (2017). RefineNet: Multi-path refinement networks for high-resolution semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1925-1934. https://arxiv.org/abs/1611.06612v3

[18] Lin, G., Liu, F., Milan, A., Shen, C., Reid, I. (2019). RefineNet: Multi-path refinement networks for dense prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(5): 1228-1242. https://doi.org/10.1109/tpami.2019.2893630

[19] Kendall, A., Badrinarayanan, V., Cipolla, R. (2015). Bayesian SegNet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. arXiv preprint arXiv:1511.02680. https://arxiv.org/abs/1511.02680v2

[20] Cheng, J., Wei, H., Cao, S., Ru, Y., Wei, Y., Yun, Z., Wang, Z.J., Feng, Q. (2015). Correction: enhanced performance of brain tumor classification via tumor region augmentation and partition. PLoS ONE, 10(12): e0144479. https://doi.org/10.1371/journal.pone.0144479