CN114926394B - Colorectal cancer pathological image segmentation method based on pixel contrast learning - Google Patents
- Publication number
- Publication number: CN114926394B (Application CN202210371764.6A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- encoder
- contrast learning
- image
- pathological image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Abstract
The invention discloses a rectal cancer pathological image segmentation method based on pixel contrast learning, belonging to the technical field of image segmentation, which comprises the following steps: randomly cutting two subgraphs from an image and keeping the pixel features corresponding to the subgraphs consistent; applying two different enhancements to the pair of pictures and then inputting them into the encoder network and the momentum encoder network of the model, both of which consist of a ResNet connected to a projection head (two 1×1 convolutions with a batch normalization and ReLU activation layer between them). According to the invention, an encoder trained through contrast learning replaces the original pre-trained ResNet, improving the feature extraction performance of the pathological image segmentation model without additional annotation, and by introducing contrast learning with a pixel-level loss, the method resolves the mismatch between traditional instance-level contrast learning and the downstream segmentation task.
Description
Technical Field
The invention belongs to the technical field of image segmentation methods, and particularly relates to a rectal cancer pathological image segmentation method based on pixel contrast learning.
Background
At present, diagnosis of rectal cancer requires analysis of pathological images: a pathologist evaluates the cancer cell load by quantitatively analyzing the area of pathological tissue regions in the images, and then formulates a diagnosis and treatment plan. However, manual observation is time-consuming and laborious and requires long accumulated experience. Therefore, automatic segmentation of lesion tissue regions can greatly improve diagnosis and treatment efficiency.
With the successful application of deep learning in the medical field, the pathological image field has also developed continuously. However, due to insufficient data, segmentation models often adopt a ResNet pre-trained on natural images for feature extraction, so the learned feature mapping is often not optimal. Therefore, how to improve the feature extraction capability of the model is an urgent problem to be solved.
Contrast learning is a self-supervised learning approach that requires no additional labels. Its main idea is to pull the representations of similar samples (positive examples) closer together in the mapping space while pushing the representations of dissimilar samples (negative examples) apart. Positive and negative examples are usually constructed through pretext tasks: in the image domain, an original image serves as the base sample, an enhanced (transformed) version of it serves as a positive example, and the remaining images in the batch or training data serve as negative examples. Many experiments in natural image segmentation have shown that this approach improves the feature extractor and thereby better supports downstream tasks.
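The instance-level objective just described is commonly formalized as an InfoNCE-style loss. The following is a minimal illustrative NumPy sketch (not part of the claimed method; the embedding size, sample counts, and temperature are arbitrary):

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.3):
    """Instance-level contrastive loss: pull the positive view of an image
    close to the anchor while pushing other images (negatives) apart.
    anchor, positive: (d,) L2-normalized embeddings; negatives: (n, d)."""
    pos = np.exp(anchor @ positive / tau)
    neg = np.exp(negatives @ anchor / tau).sum()
    return -np.log(pos / (pos + neg))

def unit(v):
    # project a vector onto the unit sphere so dot products are cosines
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
a = unit(rng.normal(size=8))                                   # anchor view
negs = np.stack([unit(rng.normal(size=8)) for _ in range(4)])  # other images
loss = info_nce(a, a, negs)  # identical views -> small but positive loss
```

Minimizing this loss over a batch pulls the two views of each image together, which is the instance-level behavior the pixel-level method below refines.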
The existing pathological image segmentation model has the following defects:
(1) Because of the lack of labeled data, the encoder of a segmentation network often adopts an ImageNet pre-trained model; however, such a model is trained on natural images and yields only suboptimal feature mappings for pathological images.
(2) Most contrast learning operates on instance-level differences, while image segmentation relies mainly on pixel-level features, which causes a misalignment between the upstream and downstream tasks.
(3) Although positive example construction for contrast learning has greatly benefited from various powerful image enhancement modes, these designs mainly target natural images, and little research has addressed enhancement for pathological images.
Disclosure of Invention
The invention aims to solve the problems described in the background art by providing a rectal cancer pathological image segmentation method based on pixel contrast learning.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a rectal cancer pathological image segmentation method based on pixel contrast learning specifically comprises the following steps:
s1, pixel contrast learning: randomly cutting two subgraphs from an image, and keeping the pixel characteristics corresponding to the subgraphs consistent;
s2, a pair of pictures undergoes two different enhancements and is then input into the encoder network and momentum encoder network of the model, both of which consist of a ResNet connected to a projection head (two 1×1 convolutions with a batch normalization and ReLU activation layer between them);
s3, randomly cutting two sub-feature maps from the feature maps of the two differently enhanced images, and then calculating the consistency loss between the two sub-feature maps so as to pull their mapped representations closer;
s4, based on S3, spatial distance is also taken into account: the spatial distance between the pixel sets of the two feature subgraphs is calculated; if the distance between two pixels exceeds a set threshold, no loss is calculated for that pair, and if it does not, the loss is calculated, which prevents all pixels in the image from eventually collapsing to a single value;
s5, introducing a new image enhancement mode: RandAugment is introduced and the original enhancement modes are added into it; the properties of RandAugment limit the enhancement strength and the number of enhancement methods used, avoiding the drawback of excessive enhancement;
s6, image segmentation: extracting features of the image by using an encoder, and then performing segmentation mask prediction on the image by using a decoder;
and S7, for the encoder part, the feature encoder obtained by contrast learning training is transferred to the encoder module of the U-Net to replace the original ImageNet pre-trained encoder, and the pathological image slices are then input into the network to complete segmentation.
As a further description of the above technical solution:
in the step S1, the purpose is to pull together the mapped representations of corresponding pixels that are spatially close across the different subgraphs, for learning.
As a further description of the above technical solution:
in S2, unlike the conventional contrast learning approach, pixel contrast learning maps features into a feature map, whereas the conventional method maps them into a single vector.
As a further description of the above technical solution:
in S1-S4, pixel contrast learning proposes two losses: one, denoted here $\mathcal{L}_{pix}$, for calculating the pixel-level loss between subgraphs; the other, denoted here $\mathcal{L}_{ppm}$, for calculating the loss after projection through pixel propagation.
As a further description of the above technical solution:
the saidOne is a conventional encoder with a pixel propagation module for generating smooth features; the other is a momentum encoder without a propagation module, a conventional encoder with a pixel propagation module and a momentum encoder without a propagation module are used for calculation, the distances of two different encoders are shortened, the distance is equivalent to that x in the second image is mapped again to obtain y (similar to the effect of a projection head, nonlinear mapping is performed), and then the loss is calculated by y and x' in the momentum encoder.
As a further description of the above technical solution:
in the step S5, two enhancement modes are set for contrast learning: one is the improved RandAugment, the other is the enhancement mode of the original SimCLR; the different enhancement modes allow exploring more suitable transformations for contrast learning.
As a further description of the above technical solution:
in S6, skip connections between the encoder and decoder combine low-level feature maps with high-level feature maps, yielding more accurate segmentation results.
In summary, due to the adoption of the technical scheme, the beneficial effects of the invention are as follows:
1. In the invention, the encoder is trained through contrast learning to replace the original pre-trained ResNet, improving the feature extraction performance of the pathological image segmentation model without additional annotation.
2. By introducing contrast learning with a pixel-level loss, the mismatch between traditional instance-level contrast learning and the downstream segmentation task is resolved.
3. The positive example construction of traditional contrast learning is designed for natural images, which differ from pathological images; the invention therefore provides a positive example construction method better suited to the pathological image field.
To sum up:
1. by introducing pixel contrast learning, the model can be better adapted to downstream segmentation tasks.
2. Positive examples are constructed with WSIRandAug, solving the problem that traditional contrast learning constructs positive examples for natural images.
Drawings
FIG. 1 is a network flow chart of a rectal cancer pathological image segmentation method based on pixel contrast learning;
fig. 2 is a network structure diagram of a rectal cancer pathological image segmentation method based on pixel contrast learning.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1-2, the present invention provides a technical solution: a rectal cancer pathological image segmentation method based on pixel contrast learning specifically comprises the following steps:
s1, pixel contrast learning: two subgraphs are randomly cut from an image, and the pixel features corresponding to the subgraphs are kept consistent; in S1, the purpose is to pull together the mapped representations of corresponding pixels that are spatially close across the different subgraphs, for learning;
s2, a pair of pictures undergoes two different enhancements and is then input into the encoder network and momentum encoder network of the model, both of which consist of a ResNet connected to a projection head (two 1×1 convolutions with a batch normalization and ReLU activation layer between them); in S2, unlike the conventional contrast learning approach, pixel contrast learning maps features into a feature map, whereas the conventional method maps them into a single vector;
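As an illustration of the projection head structure described in S2, note that a 1×1 convolution is simply a per-pixel linear map over channels. The following NumPy sketch is illustrative only; the channel widths (2048 in, 256 out) and the single-sample batch normalization are assumptions, not values taken from the patent text:

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W) feature map; w: (C_out, C_in) per-pixel linear map
    return np.einsum('oc,chw->ohw', w, x)

def batch_norm(x, eps=1e-5):
    # per-channel normalization over spatial positions (single-sample sketch)
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def projection_head(x, w1, w2):
    h = np.maximum(batch_norm(conv1x1(x, w1)), 0.0)  # conv -> BN -> ReLU
    return conv1x1(h, w2)                            # second 1x1 conv

rng = np.random.default_rng(0)
feat = rng.normal(size=(2048, 7, 7))       # e.g. a ResNet output feature map
w1 = rng.normal(size=(2048, 2048)) * 0.01
w2 = rng.normal(size=(256, 2048)) * 0.01
out = projection_head(feat, w1, w2)        # (256, 7, 7): still a feature map
```

The key point matched to the text: the output keeps its spatial layout (a feature map), rather than being pooled into a single vector as in instance-level methods.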
s3, two sub-feature maps are randomly cut from the feature maps of the two differently enhanced images, and the consistency loss between them is then calculated so as to pull their mapped representations closer;
s4, based on S3, spatial distance is also taken into account: the spatial distance between the pixel sets of the two feature subgraphs is calculated; if the distance between two pixels exceeds a set threshold $\mathcal{T}$, no loss is calculated for that pair, and if it does not, the loss is calculated, which prevents all pixels in the image from eventually collapsing to a single value, as shown in the formula:

$$A(i, j) = \begin{cases} 1, & \operatorname{dist}(i, j) \le \mathcal{T} \\ 0, & \operatorname{dist}(i, j) > \mathcal{T} \end{cases}$$
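The spatial threshold rule of S4 can be illustrated by mapping each feature-map cell back to original-image coordinates and thresholding pairwise distances. This sketch is illustrative only: the normalization by the cell diagonal and the threshold value 0.7 are assumptions, not values stated in the text:

```python
import numpy as np

def positive_mask(coords_a, coords_b, bin_diag, thresh=0.7):
    """coords_*: (N, 2) original-image coordinates of each feature-map cell.
    A pair (i, j) is positive when its normalized distance is within thresh;
    pairs beyond the threshold contribute no loss between those pixels."""
    d = np.linalg.norm(coords_a[:, None, :] - coords_b[None, :, :], axis=-1)
    return (d / bin_diag) <= thresh

# two 2x2 feature grids cropped from overlapping image regions (toy coords)
a = np.array([[0, 0], [0, 4], [4, 0], [4, 4]], dtype=float)
b = a + np.array([1.0, 1.0])          # second crop shifted by one pixel
mask = positive_mask(a, b, bin_diag=4 * np.sqrt(2))
```

Nearby cells across the two crops are marked as positives, while far-apart cells are excluded, matching the collapse-prevention role described above.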
in this embodiment, in S1-S4, pixel contrast learning proposes two losses: one, denoted here $\mathcal{L}_{pix}$, for calculating the pixel-level loss between subgraphs; the other, denoted here $\mathcal{L}_{ppm}$, for calculating the loss after projection through pixel propagation. $\mathcal{L}_{pix}$ is similar to instance contrast learning; the formula is as follows:

$$\mathcal{L}_{pix}(i) = -\log \frac{\sum_{j \in P_i} e^{\cos(x_i,\, x'_j)/\tau}}{\sum_{j \in P_i} e^{\cos(x_i,\, x'_j)/\tau} + \sum_{k \in N_i} e^{\cos(x_i,\, x'_k)/\tau}}$$

where i represents a pixel shared by the two subgraphs, $P_i$ and $N_i$ represent the positive and negative pixel sets located in the second subgraph, $x_i$ and $x'_j$ are the pixel feature vectors in the two subgraphs, and τ is a scalar set to 0.3; the loss is calculated twice, over the shared pixels of the first subgraph and of the second subgraph, and finally averaged;
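A direct NumPy transcription of this pixel loss for a single pixel i, with τ = 0.3 as stated above. The feature vectors are assumed L2-normalized so that the dot product equals the cosine similarity; the toy data below are illustrative:

```python
import numpy as np

def l_pix(x_i, pos, neg, tau=0.3):
    """Pixel contrast loss for one pixel i.
    x_i: (d,) feature of pixel i in the first subgraph (unit norm).
    pos: (p, d) features of its positive set P_i in the second subgraph.
    neg: (n, d) features of its negative set N_i in the second subgraph."""
    e_pos = np.exp(pos @ x_i / tau).sum()
    e_neg = np.exp(neg @ x_i / tau).sum()
    return -np.log(e_pos / (e_pos + e_neg))

def unit(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
x = unit(rng.normal(size=16))
pos = np.stack([unit(x + 0.1 * rng.normal(size=16)) for _ in range(3)])  # near x
neg = np.stack([unit(rng.normal(size=16)) for _ in range(8)])            # unrelated
loss = l_pix(x, pos, neg)
```

When the positives really are close to x and the negatives are unrelated, the loss is small; swapping the roles of the two sets makes it larger, which is the gradient signal that pulls corresponding pixels together.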
the saidOne is a conventional encoder with a pixel propagation module for generating smooth features; the other is a momentum encoder without a propagation module. The calculation is performed with a conventional encoder with a pixel propagation module and a momentum encoder without a propagation module, pulling the distances of the two different encoders. The method is equivalent to mapping x in the second graph again to obtain y (similar to the effect of a projection head, performing nonlinear mapping), and then calculating the loss by using y and x' in a momentum encoder;
by introducing both losses, the upstream and downstream tasks are made more consistent.
The overall loss is composed of three parts: in addition to the two pixel-level losses above, a conventional instance contrast learning loss $\mathcal{L}_{inst}$ is included:

$$\mathcal{L} = \mathcal{L}_{pix} + \mathcal{L}_{ppm} + \mathcal{L}_{inst}$$

where $\mathcal{L}_{inst}$ is the conventional instance contrast learning loss.
S5, introducing a new image enhancement mode: RandAugment is introduced and the original enhancement modes are added into it; the properties of RandAugment limit the enhancement strength and the number of enhancement methods used, avoiding the drawback of excessive enhancement. In S5, two enhancement modes are set for contrast learning: one is the improved RandAugment, the other is the enhancement mode of the original SimCLR; the different enhancement modes allow exploring more suitable transformations for contrast learning;
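The RandAugment scheme of S5 — sample a small number of operations from a pool and apply them at a shared magnitude — can be sketched as follows. The operation pool here is a toy stand-in, not the patent's actual WSI-oriented enhancement set:

```python
import numpy as np

def rand_augment(img, ops, n=2, magnitude=0.5, rng=None):
    """Apply n randomly chosen ops from the pool, all at a shared magnitude.
    Limiting n and magnitude is what bounds the enhancement strength."""
    if rng is None:
        rng = np.random.default_rng()
    for idx in rng.choice(len(ops), size=n, replace=False):
        img = ops[idx](img, magnitude)
    return img

# toy operation pool (stand-ins for the patent's enhancement modes)
ops = [
    lambda im, m: np.flip(im, axis=1).copy(),             # horizontal flip
    lambda im, m: np.clip(im * (1.0 + m), 0.0, 1.0),      # brightness scale
    lambda im, m: np.rot90(im, k=1, axes=(0, 1)).copy(),  # rotate 90 degrees
]

img = np.random.default_rng(3).random((32, 32, 3))
aug = rand_augment(img, ops, n=2, magnitude=0.3, rng=np.random.default_rng(4))
```

Because at most n operations fire per image, adding the original enhancement modes into the pool does not stack every transformation at once, which is the "excessive enhancement" drawback the text mentions.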
s6, image segmentation: the encoder extracts features from the image and the decoder then predicts the segmentation mask; in S6, skip connections between the encoder and decoder combine low-level feature maps with high-level feature maps, yielding more accurate segmentation results;
and S7, for the encoder part, the feature encoder obtained by contrast learning training is transferred to the encoder module of the U-Net to replace the original ImageNet pre-trained encoder, and the pathological image slices are then input into the network to complete segmentation.
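In a typical deep learning framework, the transfer in S7 amounts to copying the contrastively trained encoder weights over the U-Net encoder's entries while leaving the decoder untouched. A framework-agnostic sketch of the key-matching step, treating weight collections as plain name-to-tensor mappings (all names illustrative):

```python
def transfer_encoder(pretrained, unet, prefix="encoder."):
    """Copy contrastively pretrained encoder weights into a U-Net's encoder,
    leaving decoder weights untouched. Both arguments map names to weights."""
    updated = dict(unet)
    for name, weight in pretrained.items():
        key = prefix + name
        if key in updated:
            updated[key] = weight  # replace the ImageNet-initialized entry
    return updated

# toy weight maps: the pretrained encoder replaces the U-Net's encoder entries
pretrained = {"layer1.w": "contrastive_w1", "layer2.w": "contrastive_w2"}
unet = {"encoder.layer1.w": "imagenet_w1", "encoder.layer2.w": "imagenet_w2",
        "decoder.up1.w": "random_w"}
merged = transfer_encoder(pretrained, unet)
```

Only keys under the assumed `encoder.` prefix are overwritten, so the decoder keeps its own initialization for the downstream segmentation fine-tuning.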
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto. Any equivalent substitution or modification made by a person skilled in the art, within the technical scope disclosed by the present invention, according to the technical scheme and inventive concept of the present invention, shall be covered by the scope of the present invention.
Claims (7)
1. The rectal cancer pathological image segmentation method based on pixel contrast learning is characterized by comprising the following steps of:
s1, pixel contrast learning: randomly cutting two subgraphs from an image, and keeping the pixel characteristics corresponding to the subgraphs consistent;
s2, a pair of pictures undergoes two different enhancements and is then input into the encoder network and momentum encoder network of the model, both of which consist of a ResNet connected to a projection head (two 1×1 convolutions with a batch normalization and ReLU activation layer between them);
s3, randomly cutting two sub-feature maps from the feature maps of the two differently enhanced images, and then calculating the consistency loss between the two sub-feature maps so as to pull their mapped representations closer;
s4, based on S3, spatial distance is also taken into account: the spatial distance between the pixel sets of the two feature subgraphs is calculated; if the distance between two pixels exceeds a set threshold, no loss is calculated for that pair, and if it does not, the loss is calculated, which prevents all pixels in the image from eventually collapsing to a single value;
s5, introducing a new image enhancement mode: RandAugment is introduced and the original enhancement modes are added into it; the properties of RandAugment limit the enhancement strength and the number of enhancement methods used, avoiding the drawback of excessive enhancement;
s6, image segmentation: extracting features of the image by using an encoder, and then performing segmentation mask prediction on the image by using a decoder;
and S7, for the encoder part, the feature encoder obtained by contrast learning training is transferred to the encoder module of the U-Net to replace the original ImageNet pre-trained encoder, and the pathological image slices are then input into the network to complete segmentation.
2. The method for segmenting a pathological image of rectal cancer based on pixel contrast learning according to claim 1, wherein in S1, the purpose is to pull together the mapped representations of corresponding pixels that are spatially close across the different subgraphs, for learning.
3. The method for segmenting a pathological image of rectal cancer based on pixel contrast learning according to claim 1, wherein in S2, unlike the conventional contrast learning approach, pixel contrast learning maps features into a feature map, whereas the conventional method maps them into a single vector.
4. The method for segmenting a pathological image of rectal cancer based on pixel contrast learning according to claim 1, wherein in S1-S4, pixel contrast learning proposes two losses: one, denoted here $\mathcal{L}_{pix}$, for calculating the pixel-level loss between subgraphs; the other, denoted here $\mathcal{L}_{ppm}$, for calculating the loss after projection through pixel propagation.
5. The method for segmenting a pathological image of rectal cancer based on pixel contrast learning according to claim 4, wherein for $\mathcal{L}_{ppm}$, two branches are used: one is a conventional encoder with a pixel propagation module for generating smoothed features; the other is a momentum encoder without the propagation module; the loss is calculated between the two, pulling the outputs of the two different encoders closer, which is equivalent to mapping x in the second branch again to obtain y (similar in effect to a projection head performing a nonlinear mapping) and then calculating the loss between y and x' from the momentum encoder.
6. The method for segmenting a pathological image of rectal cancer based on pixel contrast learning according to claim 1, wherein in S5, two enhancement modes are set for contrast learning: one is the improved RandAugment, the other is the enhancement mode of the original SimCLR; the different enhancement modes allow exploring more suitable transformations for contrast learning.
7. The method for segmenting a pathological image of rectal cancer based on pixel contrast learning according to claim 1, wherein in S6, skip connections between the encoder and decoder combine low-level feature maps with high-level feature maps, thereby yielding more accurate segmentation results.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210371764.6A CN114926394B (en) | 2022-04-11 | 2022-04-11 | Colorectal cancer pathological image segmentation method based on pixel contrast learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114926394A (en) | 2022-08-19
CN114926394B (en) | 2024-04-05
Family
ID=82805421
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210371764.6A Active CN114926394B (en) | 2022-04-11 | 2022-04-11 | Colorectal cancer pathological image segmentation method based on pixel contrast learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114926394B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118506111B (en) * | 2024-07-18 | 2024-10-29 | 云南大学 | Hyperspectral image classification method and device and electronic equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014144103A1 (en) * | 2013-03-15 | 2014-09-18 | Sony Corporation | Characterizing pathology images with statistical analysis of local neural network responses |
CN113379764A (en) * | 2021-06-02 | 2021-09-10 | 厦门理工学院 | Pathological image segmentation method based on domain confrontation self-supervision learning |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014144103A1 (en) * | 2013-03-15 | 2014-09-18 | Sony Corporation | Characterizing pathology images with statistical analysis of local neural network responses |
CN113379764A (en) * | 2021-06-02 | 2021-09-10 | 厦门理工学院 | Pathological image segmentation method based on domain confrontation self-supervision learning |
Non-Patent Citations (2)
Title |
---|
Multispectral remote sensing image segmentation with fully convolutional neural networks; Yao Jianhua, Wu Jiamin, Yang Yong, Shi Zuxian; Journal of Image and Graphics; 2020-01-16 (Issue 01); full text *
Hybrid-supervised dual-channel feedback U-Net for breast ultrasound image segmentation; Gong Ronglin, Shi Jun, Wang Jun; Journal of Image and Graphics; 2020-10-16 (Issue 10); full text *
Also Published As
Publication number | Publication date |
---|---|
CN114926394A (en) | 2022-08-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110428432B (en) | Deep neural network algorithm for automatically segmenting colon gland image | |
CN109493346B (en) | Stomach cancer pathological section image segmentation method and device based on multiple losses | |
CN109711413B (en) | Image semantic segmentation method based on deep learning | |
CN115620010A (en) | Semantic segmentation method for RGB-T bimodal feature fusion | |
CN113706545A (en) | Semi-supervised image segmentation method based on dual-branch nerve discrimination dimensionality reduction | |
CN110738132B (en) | Target detection quality blind evaluation method with discriminant perception capability | |
CN114821052B (en) | Three-dimensional brain tumor nuclear magnetic resonance image segmentation method based on self-adjustment strategy | |
CN116563204A (en) | Medical image segmentation method integrating multi-scale residual attention | |
CN113807340B (en) | Attention mechanism-based irregular natural scene text recognition method | |
CN113763406B (en) | Infant brain MRI (magnetic resonance imaging) segmentation method based on semi-supervised learning | |
CN115311194A (en) | Automatic CT liver image segmentation method based on transformer and SE block | |
CN116228792A (en) | Medical image segmentation method, system and electronic device | |
CN113569724A (en) | Road extraction method and system based on attention mechanism and dilation convolution | |
CN114926394B (en) | Colorectal cancer pathological image segmentation method based on pixel contrast learning | |
CN115830054A (en) | Crack image segmentation method based on multi-window high-low frequency visual converter | |
CN115311605A (en) | Semi-supervised video classification method and system based on neighbor consistency and contrast learning | |
CN113450363B (en) | Meta-learning cell nucleus segmentation system and method based on label correction | |
CN114332122A (en) | Cell counting method based on attention mechanism segmentation and regression | |
CN117746045B (en) | Method and system for segmenting medical image by fusion of transducer and convolution | |
CN113269788B (en) | Guide wire segmentation method based on depth segmentation network and shortest path algorithm under X-ray perspective image | |
Lan et al. | Physical-model guided self-distillation network for single image dehazing | |
CN115147605A (en) | Tongue image segmentation method based on information loss region detection mechanism | |
CN115331011A (en) | Optic disc dividing method based on convolution nerve network | |
CN115205624A (en) | Cross-dimension attention-convergence cloud and snow identification method and equipment and storage medium | |
CN114463346A (en) | Complex environment rapid tongue segmentation device based on mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |