
CN113538409A - Cervical cancer image region segmentation method based on fuzzy logic and ANFIS - Google Patents

Cervical cancer image region segmentation method based on fuzzy logic and ANFIS Download PDF

Info

Publication number
CN113538409A
CN113538409A (application CN202110897279.8A)
Authority
CN
China
Prior art keywords
image
cervical
layer
anfis
fuzzy logic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110897279.8A
Other languages
Chinese (zh)
Other versions
CN113538409B (en)
Inventor
史庆武
殷守林
陈珏晓
李航
滕琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Normal University
Jiamusi University
Original Assignee
Shenyang Normal University
Jiamusi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Normal University, Jiamusi University filed Critical Shenyang Normal University
Priority to CN202110897279.8A priority Critical patent/CN113538409B/en
Publication of CN113538409A publication Critical patent/CN113538409A/en
Application granted granted Critical
Publication of CN113538409B publication Critical patent/CN113538409B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20064Wavelet transform [DWT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a cervical cancer image region segmentation method based on fuzzy logic and ANFIS. It belongs to the field of image processing and aims to solve the problems of low segmentation rate, high time consumption, and a large number of training parameters in existing cervical cancer image segmentation methods. The invention uses fuzzy logic to detect coarse and fine edges in two iterative passes and fuses the detected edges by pixel-level image fusion. A fuzzy-rule edge detection method is used to detect the tumor region. A circularly symmetric Gabor transform is then applied to the fused cervical image, texture features are extracted from the transformed image, and the texture features are classified with an ANFIS classifier. Finally, morphological operations are used to segment the tumor region in the abnormal cervical image.

Description

Cervical cancer image region segmentation method based on fuzzy logic and ANFIS
Technical Field
The invention relates to an image segmentation technology, and belongs to the field of image processing.
Background
Cervical cancer is one of the most fatal cancers in developing countries, where mortality is high due to a lack of awareness of the disease. This type of cancer can be cured if it is found early, by detecting and removing the cancerous area of the cervix. Human papillomavirus (HPV), which affects the cervical region of female patients, is the major cause of cervical cancer; the virus damages the squamous and glandular cells of the cervix. Cytology and digital colposcopy are the methods currently used to screen for cancerous regions in captured images of the cervix.
Cervical cancer cells develop slowly; early, mild disease often presents with symptoms such as cervical pain and vomiting, while bleeding occurs in severe cases. The cancer can be screened at an early stage using computer-assisted methods. Current screening follows the sequence of cytology, colposcopy, and biopsy: cytology and colposcopy are image-based screening techniques, whereas biopsy is a cell-based method. Colposcopic screening is more cost-effective than the cytological process and uses machine-learning algorithms, such as neural networks and support vector machines (SVMs), to detect and segment cancer regions in cervical images. Fig. 1(A) is a cervical image of a normal case, and Fig. 1(B) is a cervical image of an abnormal case.
The severity of cervical cancer is classified into stages I-IV according to how far the affected cells have spread. In stage I, the cells of the cervical region are affected; this is also known as the mild stage. In stage II, cells outside the cervical region are affected; this is the intermediate stage. In stage III, cancer cells spread to the pelvic and vaginal regions. In stage IV, cancer cells spread to the bladder or rectal region with bleeding. Stages III and IV are referred to as the severe stages, and stage IV is fatal.
Many scholars have studied cervical cancer images. Karthiga Jaya et al. developed an algorithm for detecting cancer regions in cervical images, using texture features and an ANFIS classifier to distinguish cancerous from non-cancerous regions; the algorithm was verified by testing tumor-region segmentation on cervical images. Miao Wu et al. used a convolutional neural network (CNN) classification framework to detect and segment tumor regions in cervical cancer images. Their CNN employs 5 convolutional layers, each with 256 filters; the pooling layers use 3 x 3 max pooling, and dropout is applied at the output layer to mitigate the overfitting problem of traditional neural networks. Classifying cervical images with this CNN architecture yielded an overall classification accuracy of 89.48%. Zhang et al. used a deep-learning algorithm to screen cancer cells in cervical images, overcoming the limitations of traditional machine-learning algorithms; their CNN classifier, which detects and segments cancer cells with a minimized feature set, uses max pooling with 16 convolutional layers and achieved 95.1% sensitivity, 98.3% specificity, and 98.3% accuracy.
Jagtap et al. used asymmetric distribution parameters to screen tumor regions in cervical images: various regions of the cervical image are first correlated, statistical features and moments are extracted from the correlated regions, and the regions are then classified, achieving 94.8% sensitivity, 97.1% specificity, and 96.5% accuracy. Bergmeir et al. denoised the cervical image with a median filter, detected edges with a Canny edge detector, and applied a random Hough transform to the detected edges of the cervical image; an elastic segmentation algorithm followed by a level-set algorithm was then applied to the transformed image to detect the tumor region. Their method was tested on 207 real-time cervical images. Sulaiimana et al. devised a cervical cancer screening method using color characteristics, developing a semi-automated method for classifying cervical images as cancerous or non-cancerous.
The above technology has the problems of low segmentation rate, large time consumption and more training parameters.
Disclosure of Invention
The invention aims to solve the problems of low segmentation rate, high time consumption, and a large number of training parameters in existing cervical image segmentation methods, and provides a cervical image region segmentation method based on fuzzy logic and ANFIS (Adaptive Network-based Fuzzy Inference System).
The invention discloses a cervical cancer image region segmentation method based on fuzzy logic and ANFIS, which comprises the following steps:
s1, edge detection is carried out on the cervical image by using fuzzy logic, and pixel-level image fusion is carried out on the detected edges;
s2, performing annular symmetrical Gabor transformation on the fused cervical image;
s3, extracting texture features from the Gabor transformed image;
s4, classifying the textural features extracted in the S3 by adopting an adaptive neural fuzzy system ANFIS classification method, wherein the output categories comprise a normal cervical image and an abnormal cervical image;
and S5, segmenting the tumor region in the abnormal cervical image by using morphological operation.
Preferably, the process of edge detection on the cervical image by using fuzzy logic and pixel-level image fusion on the detected edges in S1 is as follows:
s1-1, acquiring a fuzzy logic threshold, specifically:
firstly, converting an RGB cervical image into a gray image, then calculating a histogram of the gray image, and taking the average value of the calculated histogram mode as a threshold value;
s1-2, traversing the cervical image with a 2 x 2 mask window and iteratively detecting edge features in the image to generate a cervical image with preliminary edge features;
s1-3, traversing the cervical image generated in S1-2 again with the 2 x 2 mask window; this second iteration generates a cervical image with further refined edge features;
s1-4, fusing the edge features of S1-2 and S1-3 twice together using a pixel level image fusion technique to generate an enhanced cervical image.
Preferably, the process of extracting texture features from the Gabor-transformed image in S3 is as follows:
extracting texture features from the Gabor-transformed image using the local derivative pattern (LDP), discrete wavelet transform (DWT), gray-level co-occurrence matrix (GLCM), and Laws energy methods, generating a texture feature map for each, and then superimposing the four feature maps to obtain the final texture features.
Preferably, the process of classifying the texture features extracted in S3 by using the adaptive neuro-fuzzy system ANFIS classification method in S4 is as follows:
the adaptive neuro-fuzzy system ANFIS sets 5 internal layers;
the first layer includes four fuzzy sets a1, a2, B1, B2, and the adaptive nodes are:
μ_Ai(x) = 1 / (1 + |(x − c_i)/a_i|^(2b_i)), i = 1, 2 (and analogously μ_Bi(y));
where a_i, b_i, and c_i are the premise parameters of the i-th adaptive node in the first layer, and x and y are the two input parameters;
the second layer includes two fixed nodes, and the output of the layer is generated by executing a multiplication function, wherein the output function of the layer is:
w_i = μ_Ai(x) · μ_Bi(y), i = 1, 2;
the third layer comprises two fixed nodes, generates an output according to a given equation, following the triggering strength, and has the output function:
w̄_i = w_i / (w_1 + w_2), i = 1, 2;
where w_i is the output of the second layer (the firing strength of rule i);
the fourth layer comprises two adaptive nodes, and the layer output function is:
w̄_i · f_i = w̄_i (p_i·x + q_i·y + r_i), i = 1, 2;
where p_i, q_i, and r_i are the consequent parameters of the i-th adaptive node of the fourth layer;
the fifth layer is an output layer, and the output function of the layer is as follows:
f = w̄_1·f_1 + w̄_2·f_2 = (w_1·f_1 + w_2·f_2) / (w_1 + w_2);
where f_i is the output of the i-th adaptive node of the fourth layer. A high value of f indicates that the detected cervical image belongs to the abnormal category.
Preferably, the process of segmenting the tumor region in the abnormal cervical image by using the morphological operation in S5 is as follows:
processing the abnormal cervical image with a dilation function to generate a dilated image;
processing the abnormal cervical image with an erosion function to generate an eroded image;
subtracting the eroded image from the dilated image to segment the tumor region.
The invention has the beneficial effects that: the invention provides a cervical image tumor region detection and segmentation method based on fuzzy logic and adaptive neuro-fuzzy inference system classification. And detecting the thick edge and the thin edge by using fuzzy logic, and performing pixel-level image fusion on the detected edges. And detecting the tumor region by adopting a fuzzy rule edge detection method. And then carrying out annular symmetrical Gabor transformation on the fused cervical image, extracting texture features from the image after annular symmetrical Gabor transformation, and classifying the texture features by adopting an ANFIS classification method. Further, morphological operations are employed for segmentation classification of tumor regions in abnormal cervical images. Experiments prove that the provided method effectively improves the segmentation accuracy.
Drawings
Fig. 1 is a cervical image, wherein fig. 1(a) is a normal cervical image; FIG. 1(B) is an abnormal cervical image;
FIG. 2 is a flow chart of the method of the present invention;
fig. 3 is an image of the cervix after a circularly symmetric Gabor transform;
FIG. 4 shows the subband coefficients after DWT decomposition, where FIG. 4(A) is the low frequency subband, FIG. 4(B) is the horizontal high frequency subband, FIG. 4(C) is the vertical high frequency subband, and FIG. 4(D) is the diagonal high frequency subband;
FIG. 5 is a block diagram of an ANFIS architecture;
fig. 6 is a tumor region segmentation, wherein fig. 6(a) is a dilated image segmenting a cancer region in a cervical image, and fig. 6(B) is a tumor region segmented by a cervical image;
fig. 7 is a comparison of segmentation results obtained by the method of the present invention, in which fig. 7(A) is an abnormal cervical image, fig. 7(B) is the segmentation result obtained by the method of the present invention, and fig. 7(C) is the ground-truth image labeled by a radiology expert.
Detailed Description
The first embodiment is as follows: the present embodiment is described with reference to figs. 1 to 7. The invention uses the Guanacaste dataset, which the National Cancer Institute (NCI) established to analyze the performance of different cervical cancer detection methods. NCI collected the cervical cancer images from the Guanacaste project, which screened 10,000 women. The cervical images were collected from anonymous patients and then labeled by a radiologist; multimodal information is available for each acquired image. The training dataset consists of 75 normal and 85 abnormal cervical images, and the test dataset consists of 76 normal and 118 abnormal cervical images.
Referring to fig. 2, the cervical cancer image region segmentation method based on fuzzy logic and ANFIS according to the present embodiment includes the following steps:
s1, edge detection is carried out on the cervical image by using fuzzy logic, and pixel-level image fusion is carried out on the detected edges;
s2, performing annular symmetrical Gabor transformation on the fused cervical image;
s3, extracting texture features from the Gabor transformed image;
s4, classifying the textural features extracted in the S3 by adopting an adaptive neural fuzzy system ANFIS classification method, wherein the output categories comprise a normal cervical image and an abnormal cervical image;
and S5, segmenting the tumor region in the abnormal cervical image by using morphological operation.
In S1, the process of edge detection of the cervical image by using fuzzy logic and pixel-level image fusion of the detected edges includes:
s1-1, acquiring a fuzzy logic threshold, specifically:
firstly, converting an RGB cervical image into a gray image, then calculating a histogram of the gray image, and taking the average value of the calculated histogram mode as a threshold value;
s1-2, traversing the cervical image with a 2 x 2 mask window and iteratively detecting edge features in the image to generate a cervical image with preliminary edge features;
s1-3, traversing the cervical image generated in S1-2 again with the 2 x 2 mask window; this second iteration generates a cervical image with further refined edge features;
s1-4, fusing the edge features of S1-2 and S1-3 twice together using a pixel level image fusion technique to generate an enhanced cervical image.
Edge detection is an important tool for tumor region detection and segmentation. An edge is an abrupt change of a pixel relative to its surrounding pixels. The invention uses fuzzy logic to detect the coarse and fine edges in the cervical image. The RGB cervical image is first converted to a grayscale image. A histogram is calculated, and the average of the histogram modes is set as the threshold. Pixels greater than or equal to the threshold are set to black (value 0), and pixels less than the threshold are set to white (value 1). Fuzzy logic is applied to the binary image to detect changes between black and white pixels. A 2 x 2 mask window is placed over the cervical image, and the following 16 fuzzy rules (2^n rules, where n = 4 is the number of pixels in the mask window) are applied to the pixels in the window. The fuzzy logic system has two input parameters (pixel 0 is black and pixel 1 is white) and three output parameters (black, white, edge); triangular membership functions are used for the input and output parameters. Table 1 shows the 16 fuzzy rules for cervical image edge detection. Assume that the pixels in the 2 x 2 mask window are P1, P2, P3, and P4. The output response is black (binary 0), white (binary 1), or edge. If all pixels in the 2 x 2 mask window have the value 0, the output response for the P4 pixel is black (binary 0). If all pixels in the window have the value 1, the output response for P4 is white (binary 1). If there is any change between P4 and its neighboring pixels (P1, P2, and P3), P4 is judged to be an "edge".
TABLE 1 edge detection based on fuzzy criteria
P1 P2 P3 P4 → output response
0 0 0 0 → black (0)
1 1 1 1 → white (1)
any other combination → edge
The first iterative edge detection process produces a cervical image of preliminary edge features and the second iterative process produces a cervical image of refined edge features. The edges are fused twice using pixel-level image fusion techniques, resulting in an enhanced cervical image.
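The thresholding and 2 x 2 mask traversal described above can be sketched as follows. This is a minimal NumPy sketch, not the patent's implementation: function names are illustrative, and a simple uniformity test on the window stands in for the triangular-membership fuzzy rules (which give the same black/white/edge outputs on binary input).

```python
import numpy as np

def fuzzy_edge_map(gray):
    """One pass of the 2x2-mask edge rules: threshold = mean of the
    histogram mode(s); pixels >= threshold -> 0 (black), pixels <
    threshold -> 1 (white); a non-uniform 2x2 window marks P4 as edge."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    threshold = np.mean(np.flatnonzero(hist == hist.max()))  # mode average
    binary = (gray < threshold).astype(np.uint8)             # 0=black, 1=white

    h, w = binary.shape
    edge = np.zeros_like(binary)
    for y in range(h - 1):
        for x in range(w - 1):
            window = binary[y:y + 2, x:x + 2]                # P1 P2 / P3 P4
            if window.min() != window.max():                 # any change -> edge
                edge[y + 1, x + 1] = 1                       # mark the P4 pixel
    return edge

def fuse(edge1, edge2):
    """Pixel-level fusion of the two iterations (per-pixel maximum)."""
    return np.maximum(edge1, edge2)
```

Running the pass twice (the second pass on the preliminary edge image) and fusing the two maps gives the enhanced cervical image described above.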
S2, performing annular symmetrical Gabor transformation on the fused cervical image:
the conventional Fourier Transform (FT) is a transform of spatial pixels into frequency pixels, each without temporal information. This limitation is overcome by using a circularly symmetric Gabor transform. The non-linear irregularity of the cervical image after the circularly symmetric Gabor transform is low. With this circularly symmetric Gabor transform, the spatial relationship between the transformed pixels and the source image pixels is linear.
And performing spatial movement on the pixels through annular symmetrical Gabor transformation, and transforming the spatial pixels into multi-parameter pixels according to the movement behaviors of the spatial pixels. The kernel function of the circularly symmetric Gabor transform is shown as follows:
G(x, y) = exp(−(x_r² + (γ·y_r)²) / (2τ²)) · cos(2π·f·x_r)
where f is the frequency of the transformed plane, θ is the phase of the Gaussian kernel, ranging from 0° to 180°, γ is the scale index, set to 0.5 in the invention, and τ is a scale factor with a unique value. The template covers the main information of the window function and improves the efficiency of feature extraction.
The pixel scaling parameters for the x-axis and y-axis are:
x_r = x·cosθ + y·sinθ
y_r = −x·sinθ + y·cosθ
the fused cervical image pixel is represented by (x, y), and the scale factor thereof is represented by τ.
Fig. 3 shows a circularly symmetric Gabor transformed cervical image, with each pixel representing multiple behaviors in space, frequency and direction.
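The rotation equations above can be turned into a small kernel generator. The kernel itself is rendered only as an image in the original, so the standard Gabor form is assumed here as an illustration (f: frequency, theta: orientation in radians, gamma = 0.5: scale index, tau: scale factor):

```python
import numpy as np

def gabor_kernel(size, f, theta, gamma=0.5, tau=1.0):
    """Sketch of a Gabor kernel built from the rotated coordinates
    x_r, y_r given in the text; the exact kernel of the patent's
    circularly symmetric transform is an assumption here."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # pixel rotation (patent eqs.)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * tau**2))
    return envelope * np.cos(2 * np.pi * f * xr)    # Gaussian-modulated cosine
```

Convolving the fused image with kernels at several orientations theta in 0-180° yields the multi-parameter (space/frequency/direction) pixel representation of Fig. 3.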
The process of extracting texture features from the Gabor-transformed image in S3 is as follows:
extracting texture features from the Gabor-transformed image using the local derivative pattern (LDP), discrete wavelet transform (DWT), gray-level co-occurrence matrix (GLCM), and Laws energy methods, generating a texture feature map for each, and then superimposing the four feature maps to obtain the final texture features.
The Gabor-transformed cervical image is decomposed with the discrete wavelet transform (DWT) into four sub-bands: one low-frequency sub-band and three high-frequency sub-bands. The approximation sub-band is the low-frequency sub-band, while the horizontal, vertical, and diagonal sub-bands are high-frequency sub-bands. The coefficients of each sub-band are used as the decomposed feature set for classifying the cervical image. Fig. 4(A) shows the low-frequency sub-band, and figs. 4(B)-(D) show the high-frequency sub-bands: the horizontal, vertical, and diagonal sub-bands, respectively.
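The four sub-bands of Fig. 4 correspond to a one-level 2-D wavelet decomposition. A minimal Haar-wavelet sketch (a production pipeline would use a wavelet library; the patent does not specify the wavelet family):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT giving the four sub-bands of Fig. 4:
    LL (approximation/low-frequency), LH (horizontal), HL (vertical),
    HH (diagonal).  Image dimensions are assumed even."""
    img = img.astype(float)
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages (low-pass)
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences (high-pass)
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh
```

The coefficients of the four returned arrays form the sub-band feature set used for classification.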
The process of classifying the texture features extracted in the step S3 by adopting an adaptive neural fuzzy system ANFIS classification method in the step S4 is as follows:
the adaptive neuro-fuzzy system ANFIS sets 5 internal layers;
the first layer includes four fuzzy sets a1, a2, B1, B2, and the adaptive nodes are:
μ_Ai(x) = 1 / (1 + |(x − c_i)/a_i|^(2b_i)), i = 1, 2 (and analogously μ_Bi(y));
where a_i, b_i, and c_i are the premise parameters of the i-th adaptive node in the first layer, and x and y are the two input parameters;
the second layer includes two fixed nodes, and the output of the layer is generated by executing a multiplication function, wherein the output function of the layer is:
w_i = μ_Ai(x) · μ_Bi(y), i = 1, 2;
the third layer comprises two fixed nodes, generates an output according to a given equation, following the triggering strength, and has the output function:
w̄_i = w_i / (w_1 + w_2), i = 1, 2;
where w_i is the output of the second layer (the firing strength of rule i);
the fourth layer comprises two adaptive nodes, and the layer output function is:
w̄_i · f_i = w̄_i (p_i·x + q_i·y + r_i), i = 1, 2;
where p_i, q_i, and r_i are the consequent parameters of the i-th adaptive node of the fourth layer;
the fifth layer is an output layer, and the output function of the layer is as follows:
f = w̄_1·f_1 + w̄_2·f_2 = (w_1·f_1 + w_2·f_2) / (w_1 + w_2);
where f_i is the output of the i-th adaptive node of the fourth layer. A high value of f indicates that the detected cervical image belongs to the abnormal category.
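The five ANFIS layers can be sketched as a single forward pass for two rules. This is a minimal NumPy sketch: the function name and parameter values are illustrative, and in the patent the premise and consequent parameters are learned during ANFIS training.

```python
import numpy as np

def anfis_forward(x, y, a, b, c, p, q, r):
    """Two-rule ANFIS forward pass.  a, b, c: premise parameters of the
    four fuzzy sets A1, A2, B1, B2 (length 4); p, q, r: consequent
    parameters of the two rules (length 2)."""
    a, b, c = (np.asarray(v, dtype=float) for v in (a, b, c))
    p, q, r = (np.asarray(v, dtype=float) for v in (p, q, r))
    # Layer 1: generalized bell membership mu = 1/(1+|(x-c)/a|^(2b))
    mu_A = 1.0 / (1.0 + np.abs((x - c[:2]) / a[:2]) ** (2 * b[:2]))
    mu_B = 1.0 / (1.0 + np.abs((y - c[2:]) / a[2:]) ** (2 * b[2:]))
    # Layer 2: firing strengths w_i = mu_Ai(x) * mu_Bi(y)
    w = mu_A * mu_B
    # Layer 3: normalized firing strengths
    w_bar = w / w.sum()
    # Layer 4: rule outputs f_i = p_i*x + q_i*y + r_i
    f_rules = p * x + q * y + r
    # Layer 5: weighted sum -> overall output f
    return float(np.sum(w_bar * f_rules))
```

A high return value corresponds to the abnormal-image category, a low value to the normal category.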
The process of segmenting the tumor region in the abnormal cervical image by using the morphological operation in S5 is as follows:
processing the abnormal cervical image with a dilation function to generate a dilated image;
processing the abnormal cervical image with an erosion function to generate an eroded image;
subtracting the eroded image from the dilated image to segment the tumor region.
The output of the ANFIS architecture is low or high: low indicates that the detected cervical image belongs to the normal category, and high indicates that it belongs to the abnormal category. Morphological operations are then used to further detect and segment the cancer region in the abnormal cervical image. A dilation function is applied to the classified abnormal image to produce a dilated image, and an erosion function is applied to produce an eroded image. Subtracting the eroded image from the dilated image segments the cancer region in the test cervical image (fig. 6(A)). Fig. 6(B) is the tumor region segmented from a test cervical image; fig. 7(A) is a cervical image, fig. 7(B) is the cervical cancer region segmented by the method herein, and fig. 7(C) is the ground-truth image labeled by a radiology expert.
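The dilation-minus-erosion step above is the classical morphological gradient. A pure-NumPy sketch on a binary mask, with an assumed 3 x 3 structuring element (the patent does not state the element size):

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation: a pixel is set if any neighbor is set."""
    padded = np.pad(mask.astype(bool), 1)
    out = np.zeros(mask.shape, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + mask.shape[0],
                          1 + dx:1 + dx + mask.shape[1]]
    return out

def erode(mask):
    """3x3 binary erosion: a pixel stays set only if all neighbors are set."""
    padded = np.pad(mask.astype(bool), 1)
    out = np.ones(mask.shape, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy:1 + dy + mask.shape[0],
                          1 + dx:1 + dx + mask.shape[1]]
    return out

def tumor_boundary(mask):
    """Dilated image minus eroded image, as in step S5."""
    return dilate(mask) & ~erode(mask)
```

Applied to the binary mask of the classified abnormal region, the result outlines the tumor boundary.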
The performance of the proposed cervical cancer detection and segmentation method was tested on the Guanacaste dataset using 76 normal and 118 abnormal cervical images. The method correctly classified 75 images as normal and 117 as abnormal. The classification rate is the percentage of correctly classified images out of the total number of images in each class; the proposed method therefore reaches a classification rate of 98.6% for the normal category and 99.1% for the abnormal category, for an average classification rate of about 98.8%.
The following indices were used to evaluate the performance of the proposed cervical cancer detection method:
Sensitivity(Se)=TP/(TP+FN)
Specificity(Sp)=TN/(TN+FP)
Accuracy(Acc)=(TP+TN)/(TP+FN+TN+FP)
TP and TN (true positives and true negatives) are the numbers of correctly detected cancer and non-cancer pixels, respectively, in the classified abnormal cervical image. FP and FN (false positives and false negatives) are the numbers of incorrectly detected cancer and non-cancer pixels, respectively. These parameters are computed from the classified abnormal cervical image and the ground-truth image.
Sensitivity measures the proportion of correctly classified cancer pixels, and specificity the proportion of correctly classified non-cancer pixels; both range from 0 to 100, and high values indicate high performance.
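The three indices follow directly from the pixel counts; a minimal sketch returning percentages:

```python
def evaluation_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy as defined above, in percent."""
    se = 100.0 * tp / (tp + fn)                        # Se = TP/(TP+FN)
    sp = 100.0 * tn / (tn + fp)                        # Sp = TN/(TN+FP)
    acc = 100.0 * (tp + tn) / (tp + fn + tn + fp)      # Acc = (TP+TN)/total
    return se, sp, acc
```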
TABLE 2 Effect of different feature extraction methods on cervical image segmentation
TABLE 3 cervical image segmentation index results
TABLE 4 comparative results
Table 2 shows the impact of the extracted features on classification results in the proposed cervical cancer test. Table 3 shows the analysis of the ground-truth images by the proposed cervical cancer segmentation method: sensitivity reaches 98.1%, specificity 99.4%, and accuracy 99.3%. Table 4 compares the proposed segmentation method with other methods; the proposed method achieves 98.6% sensitivity, 99.5% specificity, and 99.7% accuracy.

Claims (5)

1. Cervical cancer image region segmentation method based on fuzzy logic and ANFIS, characterized in that the method comprises the following steps:
s1, edge detection is carried out on the cervical image by using fuzzy logic, and pixel-level image fusion is carried out on the detected edge;
s2, performing annular symmetrical Gabor transformation on the fused cervical image;
s3, extracting texture features from the Gabor transformed image;
s4, classifying the textural features extracted in the S3 by adopting an adaptive neural fuzzy system ANFIS classification method, and outputting two types of images including a normal cervical image and an abnormal cervical image;
and S5, segmenting the tumor region in the abnormal cervical image by using morphological operation.
2. The cervical cancer image region segmentation method based on fuzzy logic and ANFIS as claimed in claim 1, wherein the edge detection of the cervical image using fuzzy logic and the pixel-level image fusion of the detected edges in S1 comprise:
s1-1, acquiring a fuzzy logic threshold, specifically:
firstly, converting an RGB cervical image into a gray image, then calculating a histogram of the gray image, and taking the average value of the calculated histogram mode as a threshold value;
s1-2, traversing the cervical image with a 2×2 mask window, iteratively detecting edge features in the image, and generating a cervical image with preliminary edge features;
s1-3, traversing the cervical image with preliminary edge features generated in S1-2 again with a 2×2 mask window, generating a cervical image with further refined edge features in a second iteration;
s1-4, fusing the edge features obtained in S1-2 and S1-3 using a pixel-level image fusion technique, thereby generating an enhanced cervical image.
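The two-pass 2×2 window scan of claim 2 can be sketched as follows. The claim does not spell out the fuzzy membership rules applied inside the window, so a simple intensity-range test stands in for them here; the alignment of the two differently sized edge maps before fusion is likewise an assumption of this sketch:

```python
import numpy as np

def fuzzy_threshold(gray):
    # Threshold = mean of the most frequent grey level(s) (histogram
    # mode), per step S1-1 of the claim.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    return np.flatnonzero(hist == hist.max()).mean()

def fuzzy_edges(gray, thresh):
    # Slide a 2x2 mask window over the image (steps S1-2/S1-3); flag an
    # edge when the intensity spread inside the window exceeds the
    # threshold -- a stand-in for the unspecified fuzzy rules.
    h, w = gray.shape
    edges = np.zeros((h - 1, w - 1), dtype=np.uint8)
    for i in range(h - 1):
        for j in range(w - 1):
            win = gray[i:i + 2, j:j + 2]
            if win.max() - win.min() > thresh:
                edges[i, j] = 255
    return edges

# Demo: a vertical step edge between columns 1 and 2.
g = np.zeros((4, 4))
g[:, 2:] = 200.0
e1 = fuzzy_edges(g, fuzzy_threshold(g))                   # first pass (S1-2)
e2 = fuzzy_edges(e1.astype(float), fuzzy_threshold(e1))   # refinement (S1-3)
# Pixel-level fusion (S1-4): align the smaller map, keep the stronger response.
e2p = np.zeros_like(e1)
e2p[:e2.shape[0], :e2.shape[1]] = e2
fused = np.maximum(e1, e2p)
```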
3. The cervical cancer image region segmentation method based on fuzzy logic and ANFIS as claimed in claim 1, wherein the process of extracting texture features from the Gabor transformed image in S3 is as follows:
extracting texture features from the Gabor-transformed image using the local derivative pattern (LDP), discrete wavelet transform (DWT), gray-level co-occurrence matrix (GLCM) and Laws energy methods respectively, generating a texture feature map for each, and then stacking the four texture feature maps to obtain the final texture features.
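The stacking step of claim 3 can be illustrated with heavily simplified per-pixel stand-ins for the four descriptors. Real LDP, DWT, GLCM and Laws computations are considerably more involved, so only the "four maps in, one stacked feature volume out" structure should be read from this sketch:

```python
import numpy as np

def conv2_same(img, k):
    # Tiny zero-padded 'same' 2-D convolution used by the sketches below.
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    pad = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (pad[i:i + kh, j:j + kw] * k).sum()
    return out

def texture_stack(img):
    # Simplified per-pixel proxies for the four descriptors in the claim.
    ldp = conv2_same(img, np.array([[0., 0., 0.],
                                    [-1., 2., -1.],
                                    [0., 0., 0.]]))   # derivative proxy (LDP)
    dwt = conv2_same(img, np.full((2, 2), 0.25))      # Haar approximation (DWT)
    glcm = np.abs(img - np.roll(img, 1, axis=1))      # neighbour contrast (GLCM)
    laws = conv2_same(img, np.outer([1., 2., 1.],
                                    [-1., 0., 1.]))   # Laws L3E3 kernel
    # Stack the four texture feature maps into the final feature volume.
    return np.dstack([ldp, dwt, glcm, laws])

feat = texture_stack(np.arange(16.0).reshape(4, 4))
```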
4. The fuzzy logic and ANFIS based cervical cancer image region segmentation method as claimed in claim 1, wherein the classification of the texture features extracted in S3 by the adaptive neuro-fuzzy inference system (ANFIS) classification method in S4 comprises:
the adaptive neuro-fuzzy inference system ANFIS is configured with 5 internal layers;
the first layer includes four fuzzy sets A1, A2, B1, B2, and the adaptive nodes compute the membership grades:
layer1 out = μAi(x) = 1/(1 + |(x − ci)/ai|^(2bi)); i = 1,2 (and analogously μBi(y) for the input y)
where ai, bi and ci are the premise parameters of the ith adaptive node in the first layer, and x and y are the two input variables;
the second layer includes two fixed nodes, and the output of the layer is generated by executing a multiplication function, wherein the output function of the layer is:
layer2 out = μAi(x)·μBi(y); i = 1,2;
the third layer comprises two fixed nodes that normalize the firing strengths, with output function:
layer3 out = w̄i = wi/(w1 + w2); i = 1,2
where wi is the output of the second layer;
the fourth layer comprises two adaptive nodes, and the layer output function is:
layer4 out = w̄i·fi = w̄i·(pi·x + qi·y + ri); i = 1,2
where pi, qi and ri are the consequent parameters of the ith adaptive node of the fourth layer;
the fifth layer is the output layer, and its output function is:
layer5 out = f = Σi w̄i·fi = (Σi wi·fi)/(Σi wi)
where fi = pi·x + qi·y + ri is the output of the corresponding adaptive node in the fourth layer; a low value of f indicates that the detected cervical image belongs to the normal cervical image category, and a high value of f indicates that it belongs to the abnormal cervical image category.
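A minimal forward pass through the five ANFIS layers described above can be sketched as follows. The claim does not name a specific membership function, so the commonly used generalized bell function is assumed for layer 1:

```python
def bell(v, a, b, c):
    # Generalised bell membership function with premise parameters a, b, c
    # (an assumption -- the claim does not specify the membership function).
    return 1.0 / (1.0 + abs((v - c) / a) ** (2 * b))

def anfis_forward(x, y, premise, consequent):
    # premise: (a, b, c) triples for the fuzzy sets A1, A2, B1, B2;
    # consequent: (p, q, r) triples for the two rules.
    a1, a2, b1, b2 = premise
    # Layer 1: fuzzification of the two inputs.
    muA = [bell(x, *a1), bell(x, *a2)]
    muB = [bell(y, *b1), bell(y, *b2)]
    # Layer 2: firing strengths w_i = muA_i(x) * muB_i(y).
    w = [muA[i] * muB[i] for i in range(2)]
    # Layer 3: normalised firing strengths w_i / (w_1 + w_2).
    wn = [wi / sum(w) for wi in w]
    # Layer 4: weighted rule outputs wn_i * (p_i*x + q_i*y + r_i).
    f4 = [wn[i] * (consequent[i][0] * x + consequent[i][1] * y + consequent[i][2])
          for i in range(2)]
    # Layer 5: overall output f (low -> normal image, high -> abnormal image).
    return sum(f4)

# Demo with symmetric toy parameters: both rules output the constant 1,
# so the normalised sum must be exactly 1 regardless of the inputs.
out = anfis_forward(0.5, 0.5,
                    premise=[(1.0, 1.0, 0.0)] * 4,
                    consequent=[(0.0, 0.0, 1.0)] * 2)
```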
5. The cervical cancer image region segmentation method based on fuzzy logic and ANFIS as claimed in claim 1, wherein the segmentation of the tumor region in the abnormal cervical image by using morphological operations in S5 comprises:
processing the abnormal cervical image with a dilation function to generate a dilated image;
processing the abnormal cervical image with an erosion function to generate an eroded image;
subtracting the eroded image from the dilated image to segment the tumor region.
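The dilation/erosion subtraction of claim 5 is the morphological gradient. A minimal sketch with a flat 3×3 structuring element (the structuring element size and shape are not specified in the claim):

```python
import numpy as np

def dilate(img, k=3):
    # Grey-scale dilation: local maximum under a flat k x k structuring element.
    p = k // 2
    pad = np.pad(img, p, mode='edge')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].max()
    return out

def erode(img, k=3):
    # Grey-scale erosion: local minimum under a flat k x k structuring element.
    p = k // 2
    pad = np.pad(img, p, mode='edge')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].min()
    return out

def tumor_boundary(abnormal_img):
    # Subtract the eroded image from the dilated image (morphological
    # gradient) to outline the segmented region, per claim 5.
    return dilate(abnormal_img) - erode(abnormal_img)

# Demo: a bright 3x3 'tumor' block in a 6x6 image.
img = np.zeros((6, 6), dtype=int)
img[2:5, 2:5] = 1
grad = tumor_boundary(img)
```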
CN202110897279.8A 2021-08-05 2021-08-05 Cervical cancer image region segmentation method based on fuzzy logic and ANFIS Active CN113538409B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110897279.8A CN113538409B (en) 2021-08-05 2021-08-05 Cervical cancer image region segmentation method based on fuzzy logic and ANFIS

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110897279.8A CN113538409B (en) 2021-08-05 2021-08-05 Cervical cancer image region segmentation method based on fuzzy logic and ANFIS

Publications (2)

Publication Number Publication Date
CN113538409A true CN113538409A (en) 2021-10-22
CN113538409B CN113538409B (en) 2023-05-16

Family

ID=78090581

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110897279.8A Active CN113538409B (en) 2021-08-05 2021-08-05 Cervical cancer image region segmentation method based on fuzzy logic and ANFIS

Country Status (1)

Country Link
CN (1) CN113538409B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237435A (en) * 2023-11-16 2023-12-15 北京智源人工智能研究院 Tumor prognosis effect evaluation method, device, electronic equipment and storage medium
CN117788472A (en) * 2024-02-27 2024-03-29 南京航空航天大学 Method for judging corrosion degree of rivet on surface of aircraft skin based on DBSCAN algorithm

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050043869A (en) * 2005-04-20 2005-05-11 송희성 Developing a computer aided diagnostic system on breast cancer using adaptive neuro-fuzzy inference system
WO2008005426A2 (en) * 2006-06-30 2008-01-10 University Of South Florida Computer-aided pathological diagnosis system
CN103098090A (en) * 2011-12-21 2013-05-08 中国科学院自动化研究所 Multiparameter three-dimensional magnetic resonance imaging brain tumor partition method
CN105741281A (en) * 2016-01-28 2016-07-06 西安理工大学 Image edge detection method based on neighbourhood dispersion
CN107194933A (en) * 2017-04-24 2017-09-22 天津大学 With reference to convolutional neural networks and the brain tumor dividing method and device of fuzzy reasoning
CN109035227A (en) * 2018-07-13 2018-12-18 哈尔滨理工大学 The system that lung tumors detection and diagnosis is carried out to CT image
CN109493325A (en) * 2018-10-23 2019-03-19 清华大学 Tumor Heterogeneity analysis system based on CT images
CN110946552A (en) * 2019-10-30 2020-04-03 南京航空航天大学 Cervical cancer pre-lesion screening method combining spectrum and image


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
M. ASHRAF ET AL: "Information Gain and Adaptive Neuro-Fuzzy Inference System for Breast Cancer Diagnoses", 5th International Conference on Computer Sciences and Convergence Information Technology *
N. YOUSSRY ET AL: "Early detection of masses in digitized mammograms using texture features and neuro-fuzzy model", Proceedings of the Twentieth National Radio Science Conference (NRSC'2003) (IEEE Cat. No.03EX665) *
刘琳琳 et al: "Research on the diagnosis, treatment and prevention of cervical cancer", Medical Information *
程茜: "Research on edge detection technology based on fuzzy logic", Master's thesis, Yangzhou University *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237435A (en) * 2023-11-16 2023-12-15 北京智源人工智能研究院 Tumor prognosis effect evaluation method, device, electronic equipment and storage medium
CN117237435B (en) * 2023-11-16 2024-02-06 北京智源人工智能研究院 Tumor prognosis effect evaluation method, device, electronic equipment and storage medium
CN117788472A (en) * 2024-02-27 2024-03-29 南京航空航天大学 Method for judging corrosion degree of rivet on surface of aircraft skin based on DBSCAN algorithm
CN117788472B (en) * 2024-02-27 2024-05-14 南京航空航天大学 Method for judging corrosion degree of rivet on surface of aircraft skin based on DBSCAN algorithm

Also Published As

Publication number Publication date
CN113538409B (en) 2023-05-16

Similar Documents

Publication Publication Date Title
Ahammed et al. A machine learning approach for skin disease detection and classification using image segmentation
Cheng et al. Approaches for automated detection and classification of masses in mammograms
Hu et al. Detection of suspicious lesions by adaptive thresholding based on multiresolution analysis in mammograms
AlMubarak et al. A hybrid deep learning and handcrafted feature approach for cervical cancer digital histology image classification
Zhang et al. Region of interest extraction in remote sensing images by saliency analysis with the normal directional lifting wavelet transform
Huang et al. A hybrid fuzzy clustering approach for the recognition and visualization of MRI images of Parkinson’s disease
Xiao et al. Defocus blur detection based on multiscale SVD fusion in gradient domain
CN113538409B (en) Cervical cancer image region segmentation method based on fuzzy logic and ANFIS
Saravanan et al. RETRACTED ARTICLE: A brain tumor image segmentation technique in image processing using ICA-LDA algorithm with ARHE model
Lin et al. Interventional multi-instance learning with deconfounded instance-level prediction
Sukumar et al. Computer aided detection of cervical cancer using Pap smear images based on hybrid classifier
Singh et al. An approach for classification of malignant and benign microcalcification clusters
Sarangi et al. Mammogram mass segmentation and detection using Legendre neural network-based optimal threshold
Dannemiller et al. A new method for the segmentation of algae images using retinex and support vector machine
Gong et al. Texture based mammogram classification and segmentation
CN116310569A (en) Mammary gland lesion detection and classification device based on robust texture features
Sukumar et al. Computer aided detection and classification of Pap smear cell images using principal component analysis
Ayoub et al. Automatic detection of pigmented network in melanoma dermoscopic images
Moayedi et al. Subclass fuzzy-SVM classifier as an efficient method to enhance the mass detection in mammograms
Thamaraichelvi Modified fuzzy clustering-based segmentation through histogram combined with K-NN classification
Niwas et al. Complex wavelet as nucleus descriptors for automated cancer cytology classifier system using ANN
Govinda et al. Artificial neural networks in UWB image processing for early detection of breast cancer
Saravanan et al. A new framework to classify the cancerous and non-cancerous pap smear images using filtering techniques to improve accuracy
Chang et al. Using fuzzy logic and particle swarm optimization to design a decision-based filter for cDNA microarray image restoration
Barbhuiya et al. Hybrid image segmentation model using KM, FCM, Wavelet KM and Wavelet FCM Techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant