
CN114332133B - Method and system for segmenting pneumonia CT image infection areas based on improved CE-Net

Info

Publication number
CN114332133B
CN114332133B (application CN202210009185.7A)
Authority
CN
China
Prior art keywords: image, module, convolution, pneumonia
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210009185.7A
Other languages
Chinese (zh)
Other versions
CN114332133A (en)
Inventor
郑茜颖
邱纯乾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN202210009185.7A priority Critical patent/CN114332133B/en
Publication of CN114332133A publication Critical patent/CN114332133A/en
Application granted Critical
Publication of CN114332133B publication Critical patent/CN114332133B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a pneumonia CT image infection area segmentation method and system based on an improved CE-Net. Firstly, an attention mechanism SE module is added in the encoding stage to introduce global context information, enhancing the receptive field of the feature extraction stage and increasing the weight of target-related feature channels, thereby improving the segmentation of small targets. Secondly, a feature aggregation module using bilinear interpolation fuses image features from different layers to obtain more discriminative representations, further improving the segmentation accuracy of the network. On the COVID-19-CT-Scans dataset, the invention captures the characteristics of pneumonia infection areas in CT images better, achieves good segmentation results, and improves markedly overall compared with the original CE-Net network and other segmentation algorithms.

Description

Method and system for segmenting pneumonia CT image infection areas based on improved CE-Net
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and system for segmenting pneumonia CT image infection areas based on an improved CE-Net.
Background
Although existing deep learning algorithms have achieved good results in pneumonia image processing, relatively little work has addressed segmenting the pneumonia infection area in the image, because segmenting the infected area from two-dimensional CT images presents the following difficulties: 1) the location, size, and shape of the infection vary greatly across different two-dimensional CT images, which often leads to false-negative detections; 2) the infected area has low contrast with normal areas; 3) the boundaries of infected areas are often blurred, making it difficult to obtain highly accurate labels.
Disclosure of Invention
To fill the gap and remedy the deficiencies of the prior art, the invention provides a pneumonia CT image infection area segmentation method and system based on an improved CE-Net, aimed at improving the segmentation accuracy for pneumonia infection areas.
Firstly, an attention mechanism SE module is added in the encoding stage to introduce global context information, enhancing the receptive field of the feature extraction stage and increasing the weight of target-related feature channels, thereby improving the segmentation of small targets. Secondly, a feature aggregation module using bilinear interpolation fuses image features from different layers to obtain more discriminative representations, further improving the segmentation accuracy of the network. On the COVID-19-CT-Scans dataset, the invention captures the characteristics of pneumonia infection areas in CT images better, achieves good segmentation results, and improves markedly overall compared with the original CE-Net network and other segmentation algorithms.
The invention adopts the following technical scheme:
the method for distinguishing the infection areas of the CT image of the pneumonia based on the improved CE-Net is characterized by comprising the following steps:
Step S1: preprocessing data of a data set, carrying out image enhancement on all CT images, finding out the outline of lung parenchyma, and cutting out the part outside the outline;
Step S2: inputting the preprocessed image obtained in the step S1 into a coding part of a network, and extracting basic features of the image through a residual block ResNet and an attention mechanism module SE respectively;
Step S3: inputting the features obtained in step S2 into a dense atrous convolution (DAC) block and a residual multi-kernel pooling (RMP) block to capture higher-level features and retain more spatial information;
Step S4: inputting the features of different scales obtained in step S2 into a feature fusion module;
Step S5: adding the features obtained in step S3 to the features fused in step S4, inputting the sum into the decoder part of the network, and obtaining the segmentation result through upsampling and deconvolution;
Step S6: optimizing the image segmentation model through a loss function.
The provided image infection area segmentation model improves on the original CE-Net model; its core additions are an attention mechanism squeeze-and-excitation (SE) module and a feature aggregation module (FAM).
Further, in step S1, the contrast of the image is enhanced with a contrast-limited adaptive histogram equalization (CLAHE) algorithm, making the infected area easier to distinguish from normal areas; the outline of the lung parenchyma is found with the Canny algorithm, and the part outside the outline is cropped to minimize the influence of irrelevant regions.
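For illustration, a minimal preprocessing sketch corresponding to step S1 follows, assuming 8-bit grayscale CT slices; the CLAHE parameters, Canny thresholds, and the largest-contour heuristic are illustrative assumptions rather than values fixed by the patent.

```python
import cv2
import numpy as np

def preprocess_ct_slice(img: np.ndarray) -> np.ndarray:
    """Enhance contrast with CLAHE, then crop to the largest detected contour."""
    # Contrast-limited adaptive histogram equalization (step S1 image enhancement)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img)

    # Canny edge map; the largest external contour approximates the lung outline
    edges = cv2.Canny(enhanced, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return enhanced  # fall back to the enhanced image if no contour is found
    largest = max(contours, key=cv2.contourArea)

    # Crop away everything outside the contour's bounding box
    x, y, w, h = cv2.boundingRect(largest)
    return enhanced[y:y + h, x:x + w]
```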
Further, in step S2, the encoding part of the network comprises three parts: the first part uses one 3×3 convolution to extract the shallow feature F0; the second part uses 4 pre-trained ResNet modules to extract deep features; the third part adds an attention mechanism module after each ResNet module to introduce global context information, enhancing the receptive field of the feature extraction stage and increasing the weight of target-related feature channels.
Further, in step S2, the residual block ResNet takes the shallow feature F0 as input, passes it through two 3×3 convolution kernels, and adds the result to the input via a shortcut connection; the attention mechanism module SE is divided into two operations, squeeze and excitation. The squeeze operation performs global average pooling on the input feature map so that each channel carries global information, expressed mathematically as:
z_c = F_sq(X_c) = (1/(H×W)) Σ_{i=1}^{H} Σ_{j=1}^{W} X_c(i, j)   (1)
where X is the input feature map, namely the output of each residual block, and H, W, C respectively denote the height, width, and number of channels of the feature map;
The excitation stage captures the interdependencies among the channels of the feature map. The operation first feeds the squeezed vector into a fully connected layer to obtain a 1×1×(C/r) vector, where r is a preset constant, and activates it with the ReLU function; the number of channels is then expanded from C/r back to C through another fully connected layer, and the channel weight coefficients s are computed through a Sigmoid function, realizing the excitation operation with the following formula:
s = F_ex(z, W) = σ(g(z, W)) = σ(W_2 δ(W_1 z))   (2)
where σ(·) is the Sigmoid activation function, δ(·) is the ReLU function, and W_1, W_2 are the weights of the two fully connected layers; finally, each channel is multiplied by its corresponding weight coefficient to obtain the resulting feature map.
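As a concrete reference, a minimal PyTorch sketch of the SE module described by equations (1) and (2) follows; r = 16 matches the embodiment below, while the class and variable names are illustrative.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global average pooling, FC-ReLU-FC-Sigmoid, channel reweighting."""
    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)  # reduce to C/r (eq. 2, W_1)
        self.fc2 = nn.Linear(channels // r, channels)  # expand back to C (eq. 2, W_2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        z = x.mean(dim=(2, 3))                                 # squeeze: global average pooling, eq. (1)
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(z))))   # excitation, eq. (2)
        return x * s.view(b, c, 1, 1)                          # channel-wise reweighting
```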
Further, in step S3, the dense atrous convolution (DAC) block has 4 cascade branches with a gradually increasing number of atrous convolutions, from 1 to 1, 3, and 5, so the receptive fields of the branches are 3, 7, 9, and 19, respectively; a 1×1 convolution with rectified linear activation is applied in each branch, and the DAC block extracts features of objects at different scales by combining atrous convolutions with different atrous rates;
The residual multi-kernel pooling (RMP) module uses four receptive fields of different sizes, namely 2×2, 3×3, 5×5, and 6×6; the four pooling kernels of different sizes yield 4 different pieces of feature information; a 1×1 convolution is added after each pooling level, the pooled features are then restored to the original size by linear interpolation, and finally the original features are concatenated with the interpolated features.
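A hedged PyTorch sketch of the DAC and RMP blocks described above follows; the branch layout (receptive fields 3, 7, 9, 19) and pooling sizes (2, 3, 5, 6) come from the text, while the channel counts, the residual summation, and the single-channel 1×1 reductions are assumptions in the spirit of the original CE-Net design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DACBlock(nn.Module):
    """Four cascade branches of atrous convolutions with receptive fields 3, 7, 9, 19."""
    def __init__(self, ch: int):
        super().__init__()
        def atrous(rate):
            return nn.Conv2d(ch, ch, 3, padding=rate, dilation=rate)
        self.b1 = atrous(1)                                                  # RF 3
        self.b2 = nn.Sequential(atrous(3), nn.Conv2d(ch, ch, 1))             # RF 7
        self.b3 = nn.Sequential(atrous(1), atrous(3), nn.Conv2d(ch, ch, 1))  # RF 9
        self.b4 = nn.Sequential(atrous(1), atrous(3), atrous(5),
                                nn.Conv2d(ch, ch, 1))                        # RF 19

    def forward(self, x):
        return (x + F.relu(self.b1(x)) + F.relu(self.b2(x))
                  + F.relu(self.b3(x)) + F.relu(self.b4(x)))

class RMPBlock(nn.Module):
    """Max pooling at 2/3/5/6, 1x1 conv, bilinear upsampling, concat with the input."""
    def __init__(self, ch: int):
        super().__init__()
        self.convs = nn.ModuleList(nn.Conv2d(ch, 1, 1) for _ in range(4))
        self.sizes = (2, 3, 5, 6)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x]
        for k, conv in zip(self.sizes, self.convs):
            p = conv(F.max_pool2d(x, kernel_size=k, stride=k))
            feats.append(F.interpolate(p, size=(h, w), mode="bilinear",
                                       align_corners=False))
        return torch.cat(feats, dim=1)  # ch + 4 output channels
```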
Further, in step S4, the feature fusion module FAM fuses the convolution blocks of different sizes obtained during encoding by bilinear interpolation, achieving feature reuse.
Further, in step S6, the loss function combines a cross-entropy loss function and a Dice coefficient loss function, where Y = {y_1, y_2, …, y_b} denotes the ground-truth values, Ŷ = {ŷ_1, ŷ_2, …, ŷ_b} denotes the prediction probabilities, N denotes the batch size, σ(·) corresponds to the Sigmoid activation function, and α takes the value 0.5.
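The printed loss formula did not survive extraction; as a hedged reconstruction, a common α-weighted combination of Dice loss and binary cross-entropy consistent with the description (Sigmoid activation, α = 0.5) is sketched below. The exact weighting used by the patent may differ.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits: torch.Tensor, target: torch.Tensor,
                  alpha: float = 0.5, eps: float = 1e-6) -> torch.Tensor:
    """Assumed form: alpha * Dice loss + (1 - alpha) * binary cross-entropy."""
    prob = torch.sigmoid(logits)                          # sigma(.) in the text
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice_loss = 1 - ((2 * inter + eps) / (denom + eps)).mean()
    bce_loss = F.binary_cross_entropy_with_logits(logits, target)
    return alpha * dice_loss + (1 - alpha) * bce_loss
```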
A pneumonia CT image infection area segmentation system based on the improved CE-Net, characterized in that: it runs on a computer system, and the adopted image segmentation model comprises an encoding module, a context extraction module, and a decoding module;
After preprocessing, the pneumonia dataset is input into the encoding module, where it passes through a 3×3 convolution kernel and then through 4 ResNet modules, each followed by squeeze and excitation in an attention mechanism SE module; the features then pass through the dense atrous convolution (DAC) and residual multi-kernel pooling (RMP) blocks of the context extraction module to capture higher-level features and retain more spatial information;
the decoding module consists of an upsampling layer and a feature aggregation module; the upsampling layer is composed of 3×3 deconvolution layers with stride 2, whose output feature maps match the size of the corresponding feature maps from the encoding process, and skip connections to the feature aggregation module are added; finally, the pneumonia infection area and the background are classified through a Sigmoid activation function, and the infection area segmentation result is output.
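For reference, a minimal sketch of one decoder upsampling stage follows: a 3×3 transposed convolution with stride 2 doubles the spatial size so the output matches the corresponding encoder feature map; the channel counts and padding settings are illustrative assumptions.

```python
import torch
import torch.nn as nn

# One decoder stage: 3x3 deconvolution with stride 2, doubling 28x28 -> 56x56
decoder_up = nn.ConvTranspose2d(in_channels=256, out_channels=128,
                                kernel_size=3, stride=2,
                                padding=1, output_padding=1)

x = torch.randn(1, 256, 28, 28)
print(decoder_up(x).shape)  # torch.Size([1, 128, 56, 56])
```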
Compared with the prior art, the invention first adds an attention mechanism SE module in the encoding process to introduce global context information, so that the model attends better to features relevant to the infection area during learning; it then adds a feature aggregation module to the original CE-Net structure, which fully fuses high- and low-level spatial information to obtain more discriminative features, yielding a better image segmentation result.
Drawings
FIG. 1 is a schematic diagram of a topology structure of a segmentation model of a CT image infection area of pneumonia based on an improved CE-Net according to an embodiment of the present invention;
FIG. 2 is a schematic topology diagram of an attention mechanism module according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a topology structure of a feature aggregation module according to an embodiment of the present invention;
FIG. 4 is a flow chart of a method for segmenting an infection area of a CT image of pneumonia based on an improved CE-Net according to an embodiment of the present invention;
FIG. 5 is a graph showing the comparison results of test images under different algorithms according to an embodiment of the present invention.
Detailed Description
In order to make the features and advantages of the present patent more comprehensible, embodiments accompanied with figures are described in detail below:
It should be noted that the following detailed description is illustrative and is intended to provide further explanation of the application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present application. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
Referring to fig. 1, which shows a schematic topological structure diagram of the pneumonia CT image segmentation model provided by an embodiment of the present invention.
According to the inventors' research, existing medical image segmentation methods tend to improve a network's segmentation capability by extracting image boundaries to obtain higher evaluation scores, but rarely consider how to improve the segmentation of small targets; the embodiment of the invention therefore provides a pneumonia CT image infection area segmentation model based on the improved CE-Net to solve these problems.
The embodiment of the invention provides an image segmentation model as shown in fig. 1. The model consists of 3 stages: an encoding stage, a context extraction module, and a decoding stage. The pneumonia dataset is preprocessed and then input into the encoding part of the network, where it passes through a 3×3 convolution kernel and then through 4 ResNet modules, each followed by squeeze and excitation in the attention mechanism SE module. The features then pass through the context extraction module (DAC and RMP) to capture higher-level features and preserve more spatial information. The decoding part consists of an upsampling layer and a feature aggregation module. The upsampling layer is composed of 3×3 deconvolution layers with stride 2, whose output feature maps match the size of the corresponding feature maps from the encoding process. Because upsampling loses some feature information of the infected area, skip connections to the feature aggregation module are added. Finally, the pneumonia infection area and the background are classified through a Sigmoid activation function, and the infection area segmentation result is output.
Referring to fig. 2 and fig. 3, fig. 2 is a schematic topology diagram of an attention mechanism module according to the present invention; fig. 3 is a schematic topology diagram of a feature aggregation module provided in the present invention.
In one embodiment, the attention mechanism module SE is divided into two operations, squeeze and excitation. The squeeze operation performs global average pooling on the input feature map so that each channel carries global information, expressed mathematically as:
z_c = F_sq(X_c) = (1/(H×W)) Σ_{i=1}^{H} Σ_{j=1}^{W} X_c(i, j)
where X is the input feature map (the output of each residual block) and H, W, C respectively denote the height, width, and number of channels of the feature map.
The excitation stage captures the interdependencies among the channels of the feature map. The operation first feeds the squeezed vector into a fully connected layer to obtain a 1×1×(C/r) vector (the model sets r to 16) and activates it with the ReLU function; the number of channels is then expanded from C/r back to C through another fully connected layer, and the channel weight coefficients s are computed through a Sigmoid function, realizing the excitation operation with the following formula:
s = F_ex(z, W) = σ(g(z, W)) = σ(W_2 δ(W_1 z))
where σ(·) is the Sigmoid activation function, δ(·) is the ReLU function, and W_1, W_2 are the weights of the two fully connected layers. Finally, each channel is multiplied by its corresponding weight coefficient to obtain the resulting feature map.
As shown in fig. 3, three types of lines represent different operations: straight lines represent connections between features of the same size, solid curves represent upward bilinear interpolation, and dashed curves represent downward interpolation. The dark and light convolution blocks represent the three inputs and outputs of the module. The core idea of the feature aggregation module is to fuse convolution blocks of different sizes by bilinear interpolation, achieving feature reuse. The main process is as follows: the 56²×128 and 28²×256 convolution blocks obtained during encoding are interpolated up by factors of two and four to obtain 112²×128 and 112²×256 convolution blocks, respectively, and the two interpolated blocks are fused with the input 112²×64 convolution block by concatenation (concat) to obtain a 112²×448 convolution block. Convolution blocks of 56²×448 and 28²×448 are then obtained in the same way.
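A minimal sketch of the aggregation step just described follows, with tensor shapes matching the text (N is the batch dimension); the function name and argument order are illustrative.

```python
import torch
import torch.nn.functional as F

def aggregate_features(f1: torch.Tensor,    # (N, 64, 112, 112)
                       f2: torch.Tensor,    # (N, 128, 56, 56)
                       f3: torch.Tensor) -> torch.Tensor:
    """Bilinearly upsample f2 by 2x and f3 by 4x, then concat with f1 -> (N, 448, 112, 112)."""
    up2 = F.interpolate(f2, scale_factor=2, mode="bilinear", align_corners=False)
    up4 = F.interpolate(f3, scale_factor=4, mode="bilinear", align_corners=False)
    return torch.cat([f1, up2, up4], dim=1)  # 64 + 128 + 256 = 448 channels
```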
In one implementation, the embodiment of the present invention further provides a method for segmenting pneumonia infection areas; referring to fig. 4, the details are as follows:
Step S1: preprocessing the data of the dataset, namely performing image enhancement on all CT images, finding the outline of the lung parenchyma, and cropping the part outside the outline to minimize the influence of irrelevant regions;
Step S2: the preprocessed image obtained in step S1 is input to the encoding part of the network, and basic features of the image are extracted through a residual block ResNet and an attention mechanism module SE, respectively, and the expression is as follows:
F0=Conv3×3(P)
Fi=fex(fsq(fre(Fi-1))
Wherein P is the pre-processed image, F 0 is the features of shallow extraction, conv 3×3 is the convolution kernel of size 3×3, F re is the ResNet module feature extraction function, F sq is the extrusion operation function in the attention mechanism, F ex is the excitation operation function, F i is the coding part i-th layer output (i=1, 2,3, 4);
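The recursion above can be read directly as code; a compact sketch follows, reusing the SEBlock idea from the earlier sketch, where conv3x3, resnet_blocks, and se_blocks are assumed module objects.

```python
def encode(p, conv3x3, resnet_blocks, se_blocks):
    """Encoder pass: F_0 = Conv3x3(P); F_i = f_ex(f_sq(f_re(F_{i-1}))) for i = 1..4."""
    f = conv3x3(p)                      # F_0: shallow feature
    features = []
    for res, se in zip(resnet_blocks, se_blocks):
        f = se(res(f))                  # squeeze and excitation after each ResNet block
        features.append(f)              # F_1..F_4, later reused by the FAM skip connections
    return features
```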
Step S3: the features obtained in step S2 are input into a dense hole convolution (Dense Atrous Convolution, DAC) and a Residual multi-core pool (RMP) to capture more advanced features and preserve more spatial information, and the expression is as follows:
FDAC=Conv3×3(rate=1)(F4)+Conv3×3(rate=3)(F4)+Conv3×3(rate=5)(F4)
FRMP=maxpool2×2(FDAC)+maxpool3×3(FDAC)+maxpool5×5(FDAC)+maxpool6×6(FDAC) Wherein F DAC,FRMP is the output of the DAC module and the RMP module, maxpool i×i is the maximum pooling function with convolution kernel of i×i, and Conv 3×3(rate=j) is the 3×3 convolution with step size j;
Step S4: the features of different scales obtained in step S2 are input into the feature fusion module, expressed as follows:
F_FAM = F_1 + f_I(F_2) + f_I(F_3)
where F_FAM is the fused feature, F_i is the output of the i-th layer of the encoding part, and f_I is the interpolation function;
Step S5: the features obtained in the step S3 and the features fused in the step S4 are added and then input into a decoder part of a network, and a segmented result is obtained through upsampling and deconvolution, wherein the specific expression is as follows:
Fdst=fTranconv(FFAM+FRMP)
Wherein fTranconv denotes a deconvolution function, fdst denotes an output;
Step S6: the image segmentation model is optimized through the loss function, where Y = {y_1, y_2, …, y_b} denotes the ground-truth values, Ŷ = {ŷ_1, ŷ_2, …, ŷ_b} denotes the prediction probabilities, N denotes the batch size, σ(·) corresponds to the Sigmoid activation function, and α takes the value 0.5.
To better illustrate the effectiveness of the invention, this embodiment also uses comparison experiments to compare segmentation results. The embodiment uses the COVID-19-CT-Scans dataset, which consists of 1600 two-dimensional CT images, all collected by the Chinese radiology institute; radiologists segmented the CT images with different labels to identify lung infected areas. 1456 images are used as the training set and 144 as the test set. The images are preprocessed and then resized to 224×224 for training. All networks used in this embodiment are implemented on the PyTorch framework. The α of the loss function is set to 0.5; the learning rate is set to 0.0001; the number of epochs is set to 100, and the model is saved every 5 epochs; the batch size is set to 8, limited by GPU memory. RMSprop is selected as the optimizer. Model performance is assessed with the Dice similarity coefficient (DSC), sensitivity (SEN), specificity (SPEC), and mean intersection over union (MIoU).
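For clarity, a minimal sketch of the four evaluation metrics named above for binary masks follows; averaging MIoU over the infection and background classes, and the per-image reduction, are assumptions about the evaluation protocol.

```python
import numpy as np

def evaluate(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6):
    """DSC, SEN, SPEC, MIoU for binary masks (1 = infection, 0 = background)."""
    tp = np.sum((pred == 1) & (gt == 1))
    tn = np.sum((pred == 0) & (gt == 0))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    dsc = 2 * tp / (2 * tp + fp + fn + eps)      # Dice similarity coefficient
    sen = tp / (tp + fn + eps)                   # sensitivity (recall)
    spec = tn / (tn + fp + eps)                  # specificity
    miou = 0.5 * (tp / (tp + fp + fn + eps)      # IoU of the infection class
                  + tn / (tn + fn + fp + eps))   # IoU of the background class
    return dsc, sen, spec, miou
```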
The invention uses the 144 test images of the COVID-19-CT-Scans dataset to evaluate model performance. The comparison experiment selects FCN, DeepLabV3+, UNet++, CE-Net, and other models for comparison with the results of the invention; the experimental results are shown in Table 1. The invention achieves a DSC of 74.32%, SEN of 84.25%, SPEC of 99.14%, and MIoU of 80.34% on the COVID-19-CT-Scans dataset; compared with the CE-Net network, the Dice similarity coefficient, specificity, and mean intersection over union improve by 1.91%, 0.16%, and 1.26%, respectively. Although the sensitivity of the invention is slightly inferior to that of CE-Net, the other three performance indexes are clearly superior, so the overall performance is better. The performance comparison shows that, while maintaining sensitivity, the invention achieves higher segmentation accuracy and segments pneumonia infection areas better.
TABLE 1 Performance comparison of different networks (%)
Algorithm         DSC     SEN     SPEC    MIoU
FCN               63.03   68.08   99.14   74.34
DeepLabV3+        62.95   72.72   98.86   74.11
UNet++            67.14   83.52   98.40   75.96
CE-Net            72.41   85.17   98.98   79.08
This embodiment   74.32   84.25   99.14   80.34
As shown in fig. 5, 5 images are selected from the test set in this embodiment and segmented with the 5 models. The first column is the original pneumonia CT image, the second column is the label image, and the third through seventh columns are the segmentation results of FCN, DeepLabV3+, UNet++, CE-Net, and the invention, in that order. Comparing the first and second rows of results in fig. 5 shows that for single-target segmentation the results of the method provided by the invention are closest to the real labels, while the results of the other four methods differ more from the real labels in size, contour, and so on. Meanwhile, due to incomplete feature extraction, the other four methods all show erroneous segmentation, marking targets in uninfected areas, whereas the invention does not. Comparing the third, fourth, and fifth rows shows that the results of the invention are also superior to the other methods when segmenting multiple targets. Especially when segmenting small targets, compared with the other four methods, the invention accurately segments each small target, and the segmentation results are closest to the real labels in shape and size.
In conclusion, the invention provides an improved CE-Net network model addressing the segmentation difficulties caused by the inconspicuous characteristics of infected areas in pneumonia CT images. The network first adds an attention mechanism SE module in the encoding process to introduce global context information, so that the model attends better to the relevant characteristics of the infection area during learning; a feature aggregation module is then added to the original CE-Net structure, which fully fuses high- and low-level spatial information to obtain more discriminative features and thereby a better segmentation result.
The programming scheme provided in this embodiment may be stored in a computer-readable storage medium in coded form and implemented as a computer program; the basic parameter information required for the computation is input through computer hardware, and the computation result is output.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations of methods, apparatus (means), and computer program products according to embodiments of the invention. It will be understood that each flow of the flowchart, and combinations of flows in the flowchart, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows.
The above description covers only preferred embodiments of the present invention and does not limit the invention in any way; any person skilled in the art may use the disclosed technical content to produce equivalent embodiments through modification or variation. Any simple modification, equivalent change, or variation of the above embodiments according to the technical substance of the present invention still falls within the protection scope of the technical solution of the present invention.
The patent is not limited to the best mode; under the teaching of this patent, anyone can derive other various methods and systems for segmenting pneumonia CT image infection areas based on the improved CE-Net, and all equivalent changes and modifications made within the scope of the patent are covered by this patent.

Claims (5)

1. A method for segmenting pneumonia CT image infection areas based on the improved CE-Net, characterized by comprising the following steps:
Step S1: preprocessing data of a data set, carrying out image enhancement on all CT images, finding out the outline of lung parenchyma, and cutting out the part outside the outline;
Step S2: inputting the preprocessed image obtained in the step S1 into a coding part of a network, and extracting basic features of the image through a residual block ResNet and an attention mechanism module SE respectively;
Step S3: inputting the features obtained in step S2 into a dense atrous convolution DAC block and a residual multi-kernel pooling RMP block to capture higher-level features and retain more spatial information;
Step S4: inputting the features of different scales obtained in step S2 into a feature fusion module; the feature fusion module FAM fuses the convolution blocks of different sizes obtained during encoding by bilinear interpolation, achieving feature reuse; the expression is as follows:
F_FAM = F_1 + f_I(F_2) + f_I(F_3)
where F_FAM is the fused feature, F_i is the output of the i-th layer of the encoding part, and f_I is the interpolation function;
Step S5: the features obtained in the step S3 and the features fused in the step S4 are added and then input into a decoder part of a network, and a segmented result is obtained through upsampling and deconvolution, wherein the specific expression is as follows:
Fdst=fTranconv(FFAM+FRMP)
Where F Tranconv represents the deconvolution function and F dst represents the output;
Step S6: optimizing the segmentation method through a loss function;
in step S2, the encoding part of the network comprises three parts: the first part uses one 3×3 convolution to extract the shallow feature F0; the second part uses 4 pre-trained ResNet modules to extract deep features; the third part adds an attention mechanism module after each ResNet module to introduce global context information, enhancing the receptive field of the feature extraction stage and increasing the weight of target-related feature channels;
The residual block ResNet takes the shallow feature F0 as input, passes it through two 3×3 convolution kernels, and adds the result to the input via a shortcut connection; the attention mechanism module SE is divided into two operations, squeeze and excitation: the squeeze operation performs global average pooling on the input feature map so that each channel carries global information, expressed mathematically as:
z_c = F_sq(X_c) = (1/(H×W)) Σ_{i=1}^{H} Σ_{j=1}^{W} X_c(i, j)   (1)
where X is the input feature map, namely the output of each residual block, and H, W, C respectively denote the height, width, and number of channels of the feature map;
The excitation stage captures the interdependencies among the channels of the feature map. The operation first feeds the squeezed vector into a fully connected layer to obtain a 1×1×(C/r) vector, where r is a set constant, and activates it with the ReLU function; the number of channels is then expanded from C/r back to C through another fully connected layer, and the channel weight coefficients s are computed through a Sigmoid function, realizing the excitation operation with the following formula:
s = F_ex(z, W) = σ(g(z, W)) = σ(W_2 δ(W_1 z))   (2)
where σ(·) is the Sigmoid activation function, δ(·) is the ReLU function, and W_1, W_2 are the weights of the two fully connected layers; finally, each channel is multiplied by its corresponding weight coefficient to obtain the resulting feature map.
2. The method for segmenting pneumonia CT image infection areas based on the improved CE-Net according to claim 1, characterized in that: in step S1, the contrast of the image is enhanced with a contrast-limited adaptive histogram equalization algorithm, making the infected area easier to distinguish from normal areas; the outline of the lung parenchyma is found with the Canny algorithm, and the part outside the outline is cropped to minimize the influence of irrelevant regions.
3. The method for segmenting pneumonia CT image infection areas based on the improved CE-Net according to claim 1, characterized in that: in step S3, the dense atrous convolution DAC block has 4 cascade branches with a gradually increasing number of atrous convolutions, from 1 to 1, 3, and 5, so the receptive fields of the branches are 3, 7, 9, and 19, respectively; a 1×1 convolution with rectified linear activation is applied in each branch, and the DAC block extracts features of objects at different scales by combining atrous convolutions with different atrous rates;
the residual multi-kernel pooling RMP module uses four receptive fields of different sizes, namely 2×2, 3×3, 5×5, and 6×6; the four pooling kernels of different sizes yield 4 different pieces of feature information; a 1×1 convolution is added after each pooling level, the pooled features are then restored to the original size by linear interpolation, and finally the original features are concatenated with the interpolated features.
4. The method for segmenting pneumonia CT image infection areas based on the improved CE-Net according to claim 1, characterized in that: in step S6, the loss function combines a cross-entropy loss function and a Dice coefficient loss function, where Y = {y_1, y_2, …, y_b} denotes the ground-truth values, Ŷ = {ŷ_1, ŷ_2, …, ŷ_b} denotes the prediction probabilities, N denotes the batch size, σ(·) corresponds to the Sigmoid activation function, and α takes the value 0.5.
5. A pneumonia CT image infection area segmentation system based on the improved CE-Net, characterized in that: it is based on a computer system and on the method for segmenting pneumonia CT image infection areas based on the improved CE-Net according to claim 1, and the adopted image segmentation model comprises: an encoding module, a context extraction module, and a decoding module;
After preprocessing, the pneumonia dataset is input into the encoding module, where it passes through a 3×3 convolution kernel and then through 4 ResNet modules, each followed by squeeze and excitation in an attention mechanism SE module; the features then pass through the dense atrous convolution DAC and residual multi-kernel pooling RMP blocks of the context extraction module to capture higher-level features and retain more spatial information;
The decoding module consists of an upsampling layer and a feature aggregation module; the upsampling layer is composed of 3×3 deconvolution layers with stride 2, whose output feature maps match the size of the corresponding feature maps from the encoding process, and skip connections to the feature aggregation module are added; finally, the pneumonia infection area and the background are classified through a Sigmoid activation function, and the infection area segmentation result is output;
the attention mechanism module SE is divided into two operations, squeeze and excitation: the squeeze operation performs global average pooling on the input feature map so that each channel carries global information, expressed mathematically as:
z_c = F_sq(X_c) = (1/(H×W)) Σ_{i=1}^{H} Σ_{j=1}^{W} X_c(i, j)
where X is the input feature map, namely the output of each residual block, and H, W, C respectively denote the height, width, and number of channels of the feature map;
the excitation stage captures the interdependencies among the channels of the feature map; the operation first feeds the squeezed vector into a fully connected layer to obtain a 1×1×(C/r) vector, where r is a set constant, and activates it with the ReLU function; the number of channels is then expanded from C/r back to C through another fully connected layer, and the channel weight coefficients s are computed through a Sigmoid function, realizing the excitation operation with the following formula:
s = F_ex(z, W) = σ(g(z, W)) = σ(W_2 δ(W_1 z))
where σ(·) is the Sigmoid activation function, δ(·) is the ReLU function, and W_1, W_2 are the weights of the two fully connected layers; finally, each channel is multiplied by its corresponding weight coefficient to obtain the resulting feature map;
The feature aggregation module first interpolates the 56²×128 and 28²×256 convolution blocks obtained during encoding up by factors of two and four to obtain 112²×128 and 112²×256 convolution blocks, respectively, and fuses the two interpolated blocks with the input 112²×64 convolution block by concatenation (concat) to obtain a 112²×448 convolution block; convolution blocks of 56²×448 and 28²×448 are then obtained in the same way.
CN202210009185.7A 2022-01-06 2022-01-06 Method and system for segmenting pneumonia CT image infection areas based on improved CE-Net Active CN114332133B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210009185.7A CN114332133B (en) 2022-01-06 2022-01-06 Method and system for segmenting pneumonia CT image infection areas based on improved CE-Net

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210009185.7A CN114332133B (en) 2022-01-06 2022-01-06 Method and system for segmenting pneumonia CT image infection areas based on improved CE-Net

Publications (2)

Publication Number Publication Date
CN114332133A (en) 2022-04-12
CN114332133B true CN114332133B (en) 2024-07-30

Family

ID=81024714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210009185.7A Active CN114332133B (en) 2022-01-06 2022-01-06 Method and system for segmenting pneumonia CT image infection areas based on improved CE-Net

Country Status (1)

Country Link
CN (1) CN114332133B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882282A (en) * 2022-05-16 2022-08-09 福州大学 Neural network prediction method for colorectal cancer treatment effect based on MRI and CT images
WO2024046142A1 (en) * 2022-08-30 2024-03-07 Subtle Medical, Inc. Systems and methods for image segmentation of pet/ct using cascaded and ensembled convolutional neural networks
CN116702065B (en) * 2023-05-30 2024-04-16 浙江时空智子大数据有限公司 Method and system for monitoring ecological treatment pollution of black and odorous water based on image data
CN116543167B (en) * 2023-07-04 2023-09-05 真健康(北京)医疗科技有限公司 CT image segmentation method and device
CN116563285B (en) * 2023-07-10 2023-09-19 邦世科技(南京)有限公司 Focus characteristic identifying and dividing method and system based on full neural network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028242A (en) * 2019-11-27 2020-04-17 中国科学院深圳先进技术研究院 Automatic tumor segmentation system and method and electronic equipment
CN112233117A (en) * 2020-12-14 2021-01-15 浙江卡易智慧医疗科技有限公司 New coronary pneumonia CT detects discernment positioning system and computing equipment
CN112927240B (en) * 2021-03-08 2022-04-05 重庆邮电大学 CT image segmentation method based on improved AU-Net network
CN113160226B (en) * 2021-05-24 2024-06-07 苏州比格威医疗科技有限公司 Classification segmentation method and system for AMD lesion OCT image based on bidirectional guidance network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于改进CENet的新冠肺炎CT图像感染区域分割";邱纯乾;《传感器与微系统》;20231130;全文 *

Also Published As

Publication number Publication date
CN114332133A (en) 2022-04-12

Similar Documents

Publication Publication Date Title
CN114332133B (en) Method and system for segmenting pneumonia CT image infection areas based on improved CE-Net
CN107564025B (en) Electric power equipment infrared image semantic segmentation method based on deep neural network
CN111027493B (en) Pedestrian detection method based on deep learning multi-network soft fusion
CN109840556B (en) Image classification and identification method based on twin network
CN108664981B (en) Salient image extraction method and device
CN111145181B (en) Skeleton CT image three-dimensional segmentation method based on multi-view separation convolutional neural network
CN107784288B (en) Iterative positioning type face detection method based on deep neural network
CN113139543B (en) Training method of target object detection model, target object detection method and equipment
CN113343982B (en) Entity relation extraction method, device and equipment for multi-modal feature fusion
CN111681273A (en) Image segmentation method and device, electronic equipment and readable storage medium
CN111160229B (en) SSD network-based video target detection method and device
WO2023116632A1 (en) Video instance segmentation method and apparatus based on spatio-temporal memory information
CN111860683B (en) Target detection method based on feature fusion
CN113870286B (en) Foreground segmentation method based on multi-level feature and mask fusion
CN110942471A (en) Long-term target tracking method based on space-time constraint
CN114861842B (en) Few-sample target detection method and device and electronic equipment
CN110866938B (en) Full-automatic video moving object segmentation method
CN111967464A (en) Weak supervision target positioning method based on deep learning
CN114266894A (en) Image segmentation method and device, electronic equipment and storage medium
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114419406A (en) Image change detection method, training method, device and computer equipment
CN114998756A (en) Yolov 5-based remote sensing image detection method and device and storage medium
CN113762265A (en) Pneumonia classification and segmentation method and system
CN116843971A (en) Method and system for detecting hemerocallis disease target based on self-attention mechanism
CN115631112B (en) Building contour correction method and device based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant