
CN115331011A - Optic disc segmentation method based on a convolutional neural network - Google Patents

Optic disc segmentation method based on a convolutional neural network

Info

Publication number
CN115331011A
Authority
CN
China
Prior art keywords: image, network, conv, net network, net
Prior art date
Legal status
Pending
Application number
CN202211084181.1A
Other languages
Chinese (zh)
Inventor
刘思语
管军霖
何宇翔
王小龙
廖思贤
Current Assignee
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date
Filing date
Publication date
Application filed by Guilin University of Electronic Technology
Priority to CN202211084181.1A
Publication of CN115331011A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/30 - Noise filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an optic disc segmentation method based on a convolutional neural network, characterized by comprising the following steps: 1) defining a TU-Net network; 2) building a U-Net network; 3) locating the optic disc according to the output annotation image; 4) preprocessing the image; 5) constructing an AU-Net network; 6) setting a training strategy; 7) setting a loss function; 8) training the network and updating parameters; 9) carrying out image post-processing on the output image; 10) setting evaluation criteria. The method is simple to implement, has strong universality and can reduce the noise influence of the fundus image.

Description

Optic disc segmentation method based on a convolutional neural network
Technical Field
The invention relates to the fields of computer vision and image processing, and in particular to an optic disc segmentation method based on a convolutional neural network.
Background
Retinal image features are complex, and color fundus images often suffer from uneven illumination and central light reflex along the blood vessels, so the contrast between the optic disc boundary and the background is low; existing methods are therefore easily affected by external noise and segment the optic disc inaccurately. Research into an accurate and efficient optic disc segmentation method is thus very important.
Since 2010, deep learning has achieved great success in computer vision, and image semantic segmentation methods based on convolutional neural networks (CNNs) have made breakthrough progress. Compared with human observation, a CNN can automatically extract deeper features from an image, so it has clear advantages in image segmentation. Among deep-learning-based image segmentation algorithms, U-Net is of particular importance in medical image segmentation, and many researchers have proposed variant structures based on the U-Net network. CE-Net (IEEE Transactions on Medical Imaging, 2019, 38(10): 2281-2292) extracts features of different receptive fields with multi-branch convolutions and transfers ResNet into its encoder to speed up training; Zhuang, J. proposed LadderNet, which adds skip connections between each pair of adjacent encoder and decoder branches at every level and uses weight sharing to reduce the number of parameters, although the computational cost is not reduced; Li, H., et al. proposed a pyramid attention network that combines an attention mechanism with a spatial pyramid to extract accurate features; Al-Bander, B., et al. proposed a joint optic disc and optic cup segmentation method that combines a fully convolutional network with DenseNet, which segments the optic disc area well but has the disadvantage of a long training time.
Disclosure of Invention
The invention aims to provide an optic disc segmentation method based on a convolutional neural network that addresses the defects of the prior art. The method is simple to implement, has strong universality and can reduce the noise influence of the fundus image.
The technical scheme for realizing the purpose of the invention is as follows:
An optic disc segmentation method based on a convolutional neural network comprises the following steps:
1) Defining a TU-Net network: the TU-Net network consists of a U-Net network and an AU-Net network;
2) Building a U-Net network: the network model has a U-shaped symmetric structure with an encoding path and a decoding path; the encoding path consists of four Down-Conv modules, each with two convolution layers Conv (a 3×3 convolution followed by a ReLU activation) and a 2×2 max-pooling layer for downsampling; the decoding path consists of four Up-Conv modules, each consisting of a deconvolution layer for upsampling and two convolution layers; each Down-Conv module is connected to its Up-Conv module by a skip connection layer, which lets the U-Net network fuse low-level and high-level features so that the finally output feature map contains features from different layers of the picture, improving the segmentation accuracy of the model; the role of the U-Net network is to roughly locate the position of the optic disc, and it yields a segmentation map of the same size as the original image;
3) Locating the optic disc according to the output annotation image: a feature image, which roughly determines the optic disc area, is obtained through the U-Net network; edge detection is applied to the feature image, and the centre coordinates of the region are computed from the obtained boundary information; these centre coordinates give the approximate position of the optic disc in the original retinal image;
4) Cropping the original retinal image according to the coordinate information obtained in step 3) and preprocessing the image, as follows:
4-1) Cropping: an image of size 200×200 whose centre is the optic disc region is cropped out according to the optic disc positioning information obtained in step 3);
4-2) Image preprocessing: as shown in FIG. 4, the B channel of the color fundus image is first extracted and enhanced with the contrast-limited adaptive histogram equalization (CLAHE) algorithm; to better eliminate the influence of the central vessels and background noise of the color fundus image, the enhanced image is then processed with a morphological closing operation; finally, a polar coordinate transformation is applied to the output image and to the annotation image, cropped in step 4-1), that corresponds to the input image;
5) Constructing an AU-Net network: the encoding path of the AU-Net network is formed by 4 Res-Conv convolution blocks and the decoding path by 4 Up-Conv modules, the Up-Conv module being identical to the decoding module of the U-Net network in step 1); compared with the classic U-Net network, AU-Net replaces the traditional encoding modules in the encoding path with Res-Conv convolution modules, which adopt the ResNet idea so that the network can make full use of the learned features; each Res-Conv module is connected to its Up-Conv module by a skip connection layer, which lets the network fuse low-level and high-level features so that the finally output feature map contains features from different layers of the picture, improving the segmentation accuracy of the model; on the skip connections AU-Net adopts attention gates (AGs), first proposed by Oktay, O., et al., which suppress feature responses in irrelevant background regions while keeping position information, making the optic disc features more prominent and improving the sensitivity and prediction accuracy of the model while preserving the computational efficiency of AU-Net;
6) Setting a training strategy: the SGD optimization algorithm is adopted, with batch size BatchSize set to 4, training period epoch set to 200 and initial learning rate set to 0.001; through the AU-Net network, pixel-level segmentation of the input image is achieved and the optic disc region can be segmented accurately;
7) Setting a loss function: a loss function combining cross-entropy and Dice loss is adopted for network training; the optic disc segmentation problem can be regarded as a binary classification problem over pixels, so the cross-entropy loss function is adopted; the Dice loss function is commonly used to measure the similarity of two samples and is introduced as a part of the loss function to improve the overall segmentation accuracy. The cross-entropy function is shown in formula (1):

$$L_{Cross\text{-}Entropy} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i\log p_i + (1-y_i)\log(1-p_i)\right] \tag{1}$$

where $y_i$ is the label of the i-th pixel (1 for the optic disc, 0 for the background) and $p_i$ is the probability that the i-th pixel is predicted to be the optic disc. The Dice loss function is shown in formula (2):

$$L_{Dice} = 1 - \frac{2|A\cap B|}{|A| + |B|} \tag{2}$$

where $|A|$ and $|B|$ denote the number of pixels in the ground truth and the predicted mask, respectively. The loss function adopted by the technical scheme is shown in formula (3):

$$L_{Loss} = L_{Cross\text{-}Entropy} + L_{Dice} \tag{3}$$
8) Training the network and updating the parameters: the AU-Net network is trained according to the training strategy set in step 6); during training, the weights and biases in the AU-Net network are updated with the back-propagation algorithm, guided dynamically by the loss function;
9) Image post-processing of the output image: an inverse polar coordinate transformation is first applied to the output prediction image to obtain a prediction image in Cartesian coordinates, which is then restored to the original image size according to the positioning information;
10) Setting evaluation criteria: to evaluate the segmentation effect of the network model, the technical scheme selects accuracy Acc, specificity Spe, sensitivity Se, precision and F1-Score, computed as in formulas (4)-(8):

$$Acc = \frac{TP + TN}{TP + TN + FP + FN} \tag{4}$$

$$Spe = \frac{TN}{TN + FP} \tag{5}$$

$$Se = \frac{TP}{TP + FN} \tag{6}$$

$$Precision = \frac{TP}{TP + FP} \tag{7}$$

$$F1\text{-}Score = \frac{2\times Precision\times Se}{Precision + Se} \tag{8}$$

where TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives, respectively: TP is the number of pixels predicted as optic disc that truly are optic disc; TN is the number of pixels predicted as background that truly are background; FP is the number of pixels predicted as optic disc that truly are background; FN is the number of pixels predicted as background that truly are optic disc;
11) Evaluating the network model: the optic disc segmentation images predicted by the technical scheme are compared with the annotated images of the data set in step 1), and the performance of the model is evaluated according to the evaluation criteria.
The technical scheme is built on the U-Net neural network framework and proposes a novel image segmentation network, TU-Net, comprising a front-end U-Net network and a back-end AU-Net network. The U-Net network is a pre-trained network that coarsely segments the position of the optic disc; through it, an image of size 200×200 whose central area is the optic disc is obtained. After preprocessing, this image enters the AU-Net network, which segments the optic disc region accurately and outputs a 200×200 segmented image; finally, the final optic disc segmentation image is restored using the earlier positioning information. In the downsampling path, AU-Net replaces the original double-convolution module with the Res-Conv module proposed by the technical scheme. Specifically, the Res-Conv module first applies to the input feature map A two 1×1 convolution-plus-batch-normalization stages and one 3×3 convolution, batch normalization and ReLU operation, then adds the resulting feature map to A and applies one ReLU operation to obtain feature map B; feature map B then passes through one 3×3 convolution and batch normalization, is added to B, and after one ReLU operation yields feature map C; feature map C enters the next layer after one 2×2 downsampling. Compared with other modules, the Res-Conv module adds a skip connection between input and output, combining the original feature map with the convolved feature map, which effectively reduces feature loss and counteracts the network degradation that comes with deepening the network. The technical scheme also adopts attention gates (AGs) at the skip connections of the original U-Net network; AGs suppress feature responses in irrelevant background areas while keeping position information, so that the optic disc features become more prominent, improving the sensitivity and prediction accuracy of the model while preserving the computational efficiency of AU-Net.
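To make the Res-Conv description concrete, the following PyTorch sketch gives one plausible reading of the block; the channel widths, the 1×1 projection on the identity branch (needed whenever input and output channel counts differ) and the use of max pooling for the 2×2 downsampling are illustrative assumptions, not details taken from the patent.

```python
import torch.nn as nn

class ResConv(nn.Module):
    """One plausible reading of the Res-Conv block described above."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Two 1x1 conv+BN stages around one 3x3 conv+BN+ReLU, then add the input
        self.stage1 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )
        # Assumed 1x1 projection so the identity branch matches the channel count
        self.proj = (nn.Conv2d(in_ch, out_ch, 1, bias=False)
                     if in_ch != out_ch else nn.Identity())
        # One 3x3 conv+BN, whose output is added to feature map B
        self.stage2 = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.relu = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(2)  # 2x2 downsampling before the next layer

    def forward(self, a):
        b = self.relu(self.stage1(a) + self.proj(a))  # feature map B
        c = self.relu(self.stage2(b) + b)             # feature map C
        return self.pool(c)
```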
The technical scheme has the following advantages:
1. locating the optic disc region with U-Net effectively eliminates the noise influence of non-disc regions;
2. the AU-Net network replaces the traditional encoding modules in the encoding path with Res-Conv convolution modules, whose design draws on the ResNet idea so that the network can make full use of the learned features; each Res-Conv module is connected to its Up-Conv module by a skip connection layer, which lets the network fuse low-level and high-level features so that the finally output feature map contains features from different layers of the picture, improving the segmentation accuracy of the model; AU-Net further introduces attention gates AGs on the skip connections, which suppress feature responses in irrelevant background areas while keeping position information, making the optic disc features more prominent; AU-Net thus improves the sensitivity and prediction accuracy of the model while keeping its computational efficiency;
3. single-channel extraction, polar coordinate transformation and CLAHE image preprocessing effectively improve the contrast between the optic disc region and the background, thereby improving optic disc segmentation accuracy.
The method is simple to implement, has strong universality and can reduce the noise influence of the fundus image.
Drawings
FIG. 1 is a schematic diagram of a TU-Net network structure in an embodiment;
FIG. 2 is a schematic diagram of an AU-Net network structure in an embodiment;
FIG. 3 is a diagram illustrating the structure of a Res-Conv module in an embodiment;
FIG. 4 is a diagram illustrating the result of image preprocessing in an embodiment;
the specific implementation mode is as follows:
the invention will be further illustrated by the following figures and examples, but is not limited thereto.
Example:
An optic disc segmentation method based on a convolutional neural network comprises the following steps:
1) Defining the TU-Net network: as shown in FIG. 1, the TU-Net network is composed of a U-Net network and an AU-Net network;
2) Building a U-Net network: the network model has a U-shaped symmetric structure with an encoding path and a decoding path; the encoding path consists of four Down-Conv modules, each with two convolution layers Conv (a 3×3 convolution followed by a ReLU activation) and a 2×2 max-pooling layer for downsampling; the decoding path consists of four Up-Conv modules, each consisting of a deconvolution layer for upsampling and two convolution layers; each Down-Conv module is connected to its Up-Conv module by a skip connection layer, which lets the U-Net network fuse low-level and high-level features so that the finally output feature map contains features from different layers of the picture, improving the segmentation accuracy of the model; the role of the U-Net network is to roughly locate the position of the optic disc, and it yields a segmentation map of the same size as the original image;
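As a reference point, the Down-Conv and Up-Conv modules described above might be sketched in PyTorch as follows; the module names, the channel handling and the use of ConvTranspose2d for the deconvolution layer are illustrative assumptions rather than details fixed by the patent.

```python
import torch
import torch.nn as nn

class DownConv(nn.Module):
    """Down-Conv module: two 3x3 conv + ReLU layers, then 2x2 max pooling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        feat = self.conv(x)              # kept for the skip connection
        return self.pool(feat), feat

class UpConv(nn.Module):
    """Up-Conv module: deconvolution upsampling, skip concatenation, two convs."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, 2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch * 2, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        # fuse low-level (skip) and high-level features via the skip connection
        return self.conv(torch.cat([x, skip], dim=1))
```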
3) Locating the optic disc according to the output annotation image: a feature image, which roughly determines the optic disc area, is obtained through the U-Net network; edge detection is applied to the feature image, and the centre coordinates of the region are computed from the obtained boundary information; these centre coordinates give the approximate position of the optic disc in the original retinal image;
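A minimal sketch of this localization step follows, assuming the U-Net output is a per-pixel probability mask and using OpenCV's Canny detector for the edge detection (the patent does not name a specific edge operator, so that choice is an assumption):

```python
import cv2
import numpy as np

def locate_disc(prob_mask):
    """Estimate the optic disc centre from the coarse U-Net output:
    edge detection on the thresholded mask, then the centroid of the
    boundary pixels as the approximate disc position."""
    binary = (prob_mask > 0.5).astype(np.uint8) * 255
    edges = cv2.Canny(binary, 100, 200)
    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return None                         # no boundary found
    return int(xs.mean()), int(ys.mean())   # (cx, cy) in image coordinates
```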
4) Cropping the original retinal image according to the coordinate information obtained in step 3) and preprocessing the image, as follows:
4-1) Cropping: an image of size 200×200 whose centre is the optic disc region is cropped out according to the optic disc positioning information obtained in step 3);
4-2) Image preprocessing: the B channel of the color fundus image is first extracted and enhanced with the contrast-limited adaptive histogram equalization (CLAHE) algorithm; to better eliminate the influence of the central vessels and background noise of the color fundus image, the enhanced image is then processed with a morphological closing operation; finally, a polar coordinate transformation is applied to the output image and to the annotation image, cropped in step 4-1), that corresponds to the input image;
5) Constructing an AU-Net network: as shown in FIG. 2, the encoding path of the AU-Net network is formed by 4 Res-Conv convolution blocks and the decoding path by 4 Up-Conv modules, the Up-Conv module being identical to the decoding module of the U-Net network in step 1); compared with the classical U-Net network, AU-Net replaces the traditional encoding modules in the encoding path with Res-Conv convolution modules; as shown in FIG. 3, the Res-Conv module adopts the ResNet idea so that the network can make full use of the learned features; each Res-Conv module is connected to its Up-Conv module by a skip connection layer, which lets the network merge low-level and high-level features so that the finally output feature map contains features from different layers of the picture, improving the segmentation accuracy of the model; on the skip connections the AU-Net network adopts attention gates (AGs), first proposed by Oktay, O., et al., which suppress feature responses in irrelevant background regions while retaining position information, making the optic disc features more prominent and improving the sensitivity and prediction accuracy of the model while keeping its computational efficiency;
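For reference, a minimal sketch of an additive attention gate in the spirit of Oktay et al. is shown below; the channel sizes are illustrative, and the gate assumes the gating signal has already been brought to the spatial resolution of the skip features:

```python
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate on a skip connection: suppresses responses
    from irrelevant background regions while keeping position information.
    Channel sizes are illustrative assumptions."""
    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.wg = nn.Sequential(nn.Conv2d(g_ch, inter_ch, 1), nn.BatchNorm2d(inter_ch))
        self.wx = nn.Sequential(nn.Conv2d(x_ch, inter_ch, 1), nn.BatchNorm2d(inter_ch))
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.BatchNorm2d(1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        # g: gating signal from the decoder (assumed same spatial size as x)
        # x: skip-connection features from the encoder
        alpha = self.psi(self.relu(self.wg(g) + self.wx(x)))  # attention coefficients
        return x * alpha   # re-weight the skip features
```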
6) Setting a training strategy: the SGD optimization algorithm is adopted, with batch size BatchSize set to 4, training period epoch set to 200 and initial learning rate set to 0.001; through the AU-Net network, pixel-level segmentation of the input image is achieved and the optic disc region can be segmented accurately;
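The stated training strategy maps directly onto a standard PyTorch loop; `model`, `train_loader` and `combined_loss` are placeholders for the AU-Net, a data pipeline built with batch size 4, and the loss of step 7):

```python
import torch

def train(model, train_loader, combined_loss, device="cuda"):
    """Training strategy of step 6): SGD, 200 epochs, initial lr 0.001;
    BatchSize = 4 is assumed to be set when building `train_loader`."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
    model.to(device).train()
    for epoch in range(200):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = combined_loss(model(images), labels)
            loss.backward()      # back-propagation updates weights and biases
            optimizer.step()
```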
7) Setting a loss function: a loss function combining cross-entropy and Dice loss is adopted for network training; the optic disc segmentation problem can be regarded as a binary classification problem over pixels, so the cross-entropy loss function is adopted; the Dice loss function is commonly used to measure the similarity of two samples and is introduced as a part of the loss function to improve the overall segmentation accuracy. The cross-entropy function is shown in formula (1):

$$L_{Cross\text{-}Entropy} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i\log p_i + (1-y_i)\log(1-p_i)\right] \tag{1}$$

where $y_i$ is the label of the i-th pixel (1 for the optic disc, 0 for the background) and $p_i$ is the probability that the i-th pixel is predicted to be the optic disc. The Dice loss function is shown in formula (2):

$$L_{Dice} = 1 - \frac{2|A\cap B|}{|A| + |B|} \tag{2}$$

where $|A|$ and $|B|$ denote the number of pixels in the ground truth and the predicted mask, respectively. The loss function used in this example is shown in formula (3):

$$L_{Loss} = L_{Cross\text{-}Entropy} + L_{Dice} \tag{3}$$
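A sketch of the combined loss of formulas (1)-(3), assuming the network emits one logit per pixel for the optic disc class; the smoothing term `eps` is an implementation convenience, not part of the patent:

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, eps=1e-6):
    """L_Loss = L_Cross-Entropy + L_Dice; `target` is the binary
    ground-truth mask (float tensor, same shape as `logits`)."""
    prob = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, target)           # formula (1)
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)   # formula (2)
    return bce + dice                                                  # formula (3)
```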
8) Training the network and updating the parameters: the AU-Net network is trained according to the training strategy set in step 6); during training, the weights and biases in the AU-Net network are updated with the back-propagation algorithm, guided dynamically by the loss function;
9) Image post-processing of the output image: an inverse polar coordinate transformation is first applied to the output prediction image to obtain a prediction image in Cartesian coordinates, which is then restored to the original image size according to the positioning information;
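Step 9) might be implemented as follows with OpenCV; the function signature, the zero-filled background and the paste-back convention are assumptions:

```python
import cv2
import numpy as np

def postprocess(pred_polar, centre, orig_shape):
    """Sketch of step 9): inverse polar transform back to Cartesian
    coordinates, then paste the 200x200 prediction into a blank image
    of the original size at the located disc position."""
    h, w = pred_polar.shape[:2]                    # 200 x 200 crop
    radius = min(h, w) / 2.0
    cart = cv2.warpPolar(pred_polar, (w, h), (w / 2.0, h / 2.0), radius,
                         cv2.WARP_POLAR_LINEAR | cv2.WARP_INVERSE_MAP)
    full = np.zeros(orig_shape[:2], dtype=cart.dtype)
    cx, cy = centre                                # from the localization step
    y0, x0 = cy - h // 2, cx - w // 2
    full[y0:y0 + h, x0:x0 + w] = cart              # restore original image size
    return full
```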
10) Setting evaluation criteria: to evaluate the segmentation effect of the network model, accuracy Acc, specificity Spe, sensitivity Se, precision and F1-Score are selected, computed as in formulas (4)-(8):

$$Acc = \frac{TP + TN}{TP + TN + FP + FN} \tag{4}$$

$$Spe = \frac{TN}{TN + FP} \tag{5}$$

$$Se = \frac{TP}{TP + FN} \tag{6}$$

$$Precision = \frac{TP}{TP + FP} \tag{7}$$

$$F1\text{-}Score = \frac{2\times Precision\times Se}{Precision + Se} \tag{8}$$

where TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives, respectively: TP is the number of pixels predicted as optic disc that truly are optic disc; TN is the number of pixels predicted as background that truly are background; FP is the number of pixels predicted as optic disc that truly are background; FN is the number of pixels predicted as background that truly are optic disc;
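Under these definitions, the metrics of formulas (4)-(8) reduce to pixel-wise confusion counts; a small NumPy sketch (binary-mask convention assumed as stated above, guards for empty masks omitted for brevity):

```python
import numpy as np

def evaluate(pred, gt):
    """Confusion counts and metrics of formulas (4)-(8).
    `pred` and `gt` are binary masks (1 = optic disc, 0 = background)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    acc = (tp + tn) / (tp + tn + fp + fn)
    spe = tn / (tn + fp)
    se = tp / (tp + fn)                     # sensitivity (recall)
    precision = tp / (tp + fp)
    f1 = 2 * precision * se / (precision + se)
    return {"Acc": acc, "Spe": spe, "Se": se, "Precision": precision, "F1": f1}
```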
11) Evaluating the network model: the optic disc segmentation images predicted by the method are compared with the annotated images of the data set in step 1), and the performance of the model is evaluated according to the evaluation criteria.

Claims (1)

1. An optic disc segmentation method based on a convolutional neural network, characterized by comprising the following steps:
1) Defining the TU-Net network: the TU-Net network consists of a U-Net network and an AU-Net network;
2) Building a U-Net network: the network model has a U-shaped symmetric structure with an encoding path and a decoding path; the encoding path consists of four Down-Conv modules, each with two Conv convolution layers (a 3×3 convolution followed by a ReLU activation) and a 2×2 max-pooling layer for downsampling; the decoding path consists of 4 Up-Conv modules, each consisting of a deconvolution layer for upsampling and two convolution layers; each Down-Conv module is connected to its Up-Conv module by a skip connection layer, which lets the U-Net network fuse low-level and high-level features so that the final output feature map contains features of different layers of the original input picture;
3) Locating the optic disc according to the output annotation image: a feature image, which roughly determines the optic disc area, is obtained through the U-Net network; edge detection is applied to the feature image, and the centre coordinates of the region are computed from the obtained boundary information; these centre coordinates give the approximate position of the optic disc in the original retinal image;
4) Cropping the retinal image in the data set and its corresponding annotation image according to the coordinate information obtained in step 3), and preprocessing the images, as follows:
4-1) Cropping: an image of size 200×200 whose centre is the optic disc region is cropped out according to the optic disc positioning information obtained in step 3);
4-2) Image preprocessing: the B channel of the color fundus image is first extracted and enhanced with the contrast-limited adaptive histogram equalization CLAHE algorithm; the enhanced image is then processed with a morphological closing operation; finally, a polar coordinate transformation is applied to the output image and to the annotation image, cropped in step 4-1), that corresponds to the input image; the image preprocessing process is shown in FIG. 4;
5) Constructing an AU-Net network: the encoding path of the AU-Net network is formed by 4 Res-Conv convolution blocks and the decoding path by 4 Up-Conv modules, the Up-Conv module being identical to the decoding module of the U-Net network in step 1); AU-Net replaces the traditional encoding modules in the encoding path with Res-Conv convolution modules, which adopt the ResNet idea; each Res-Conv module is connected to its Up-Conv module by a skip connection layer, which lets the network fuse low-level and high-level features so that the finally output feature map contains features of different layers of the picture; AU-Net adopts attention gates AGs on the skip connections;
6) Setting a training strategy: the SGD optimization algorithm is adopted, with batch size BatchSize set to 4, training period epoch set to 200 and initial learning rate set to 0.001;
7) Setting a loss function: a loss function combining cross-entropy and Dice loss is adopted for network training; the optic disc segmentation problem is regarded as a binary classification problem over pixels, and the Dice loss function is introduced as a part of the loss function; the cross-entropy function is shown in formula (1):

$$L_{Cross\text{-}Entropy} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i\log p_i + (1-y_i)\log(1-p_i)\right] \tag{1}$$

where $y_i$ is the label of the i-th pixel (1 for the optic disc, 0 for the background) and $p_i$ is the probability that the i-th pixel is predicted to be the optic disc; the Dice loss function is shown in formula (2):

$$L_{Dice} = 1 - \frac{2|A\cap B|}{|A| + |B|} \tag{2}$$

where $|A|$ and $|B|$ denote the number of pixels in the ground truth and the predicted mask, respectively; the loss function is shown in formula (3):

$$L_{Loss} = L_{Cross\text{-}Entropy} + L_{Dice} \tag{3}$$
8) Training the network and updating the parameters: the AU-Net network is trained according to the training strategy set in step 6); during training, the weights and biases in the AU-Net network are updated with the back-propagation algorithm, guided dynamically by the loss function;
9) Image post-processing of the output image: an inverse polar coordinate transformation is first applied to the output predicted image to obtain a predicted image in Cartesian coordinates, which is then restored to the original image size according to the positioning information;
10) Setting evaluation criteria: the segmentation effect of the network model is evaluated with accuracy Acc, specificity Spe, sensitivity Se, precision and F1-Score, computed as in formulas (4)-(8):

$$Acc = \frac{TP + TN}{TP + TN + FP + FN} \tag{4}$$

$$Spe = \frac{TN}{TN + FP} \tag{5}$$

$$Se = \frac{TP}{TP + FN} \tag{6}$$

$$Precision = \frac{TP}{TP + FP} \tag{7}$$

$$F1\text{-}Score = \frac{2\times Precision\times Se}{Precision + Se} \tag{8}$$

wherein TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives, respectively: TP is the number of pixels predicted as optic disc that truly are optic disc; TN is the number of pixels predicted as background that truly are background; FP is the number of pixels predicted as optic disc that truly are background; FN is the number of pixels predicted as background that truly are optic disc;
11) Evaluating the network model: the optic disc segmentation images predicted by the technical scheme are compared with the annotated images of the data set in step 1), and the performance of the model is evaluated according to the evaluation criteria.
CN202211084181.1A 2022-09-06 2022-09-06 Optic disc segmentation method based on a convolutional neural network Pending CN115331011A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211084181.1A CN115331011A (en) 2022-09-06 2022-09-06 Optic disc segmentation method based on a convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211084181.1A CN115331011A (en) 2022-09-06 2022-09-06 Optic disc segmentation method based on a convolutional neural network

Publications (1)

Publication Number Publication Date
CN115331011A true CN115331011A (en) 2022-11-11

Family

ID=83930083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211084181.1A Pending CN115331011A (en) 2022-09-06 2022-09-06 Optic disc segmentation method based on a convolutional neural network

Country Status (1)

Country Link
CN (1) CN115331011A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188492A (en) * 2023-02-21 2023-05-30 Beijing Changmugu Medical Technology Co Ltd Hip joint segmentation method, device, electronic equipment and computer readable storage medium
CN116188492B (en) * 2023-02-21 2024-04-26 Beijing Changmugu Medical Technology Co Ltd Hip joint segmentation method, device, electronic equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN110992382B (en) Fundus image optic cup optic disc segmentation method and system for assisting glaucoma screening
Zhou et al. D-UNet: a dimension-fusion U shape network for chronic stroke lesion segmentation
CN110210551B (en) Visual target tracking method based on adaptive subject sensitivity
CN111242288B (en) Multi-scale parallel deep neural network model construction method for lesion image segmentation
CN113807355A (en) Image semantic segmentation method based on coding and decoding structure
CN110675411A (en) Cervical squamous intraepithelial lesion recognition algorithm based on deep learning
CN114820635A (en) Polyp segmentation method combining attention U-shaped network and multi-scale feature fusion
CN114332462B (en) MRI (magnetic resonance imaging) segmentation method aiming at brain lesion integration attention mechanism
Zhao et al. Al-net: Attention learning network based on multi-task learning for cervical nucleus segmentation
CN112508864A (en) Retinal vessel image segmentation method based on improved UNet +
CN112329780B (en) Depth image semantic segmentation method based on deep learning
CN110956222B (en) Method for detecting network for underwater target detection
CN115984172A (en) Small target detection method based on enhanced feature extraction
CN114519807B (en) Global self-attention target detection method combining spatial attention of channels
CN115311194A (en) Automatic CT liver image segmentation method based on transformer and SE block
CN112288749A (en) Skull image segmentation method based on depth iterative fusion depth learning model
CN115375711A (en) Image segmentation method of global context attention network based on multi-scale fusion
CN113705670B (en) Brain image classification method and device based on magnetic resonance imaging and deep learning
CN114821052A (en) Three-dimensional brain tumor nuclear magnetic resonance image segmentation method based on self-adjustment strategy
CN114332098A (en) Carotid artery unstable plaque segmentation method based on multi-sequence magnetic resonance image
CN113269734A (en) Tumor image detection method and device based on meta-learning feature fusion strategy
CN115331011A (en) Optic disc segmentation method based on a convolutional neural network
CN110992309B (en) Fundus image segmentation method based on deep information transfer network
Iqbal et al. LDMRes-Net: Enabling real-time disease monitoring through efficient image segmentation
CN113409243B (en) Blood vessel segmentation method combining global and neighborhood information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination