CN117557840A - A method for grading fundus lesions based on small sample learning - Google Patents
- Publication number
- CN117557840A CN117557840A CN202311491052.9A CN202311491052A CN117557840A CN 117557840 A CN117557840 A CN 117557840A CN 202311491052 A CN202311491052 A CN 202311491052A CN 117557840 A CN117557840 A CN 117557840A
- Authority
- CN
- China
- Prior art keywords: network, fundus, sample, samples, meta
- Prior art date: 2023-11-10
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/048—Activation functions
- G06N3/08—Learning methods
- G06T7/0012—Biomedical image inspection
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/761—Proximity, similarity or dissimilarity measures
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30041—Eye; Retina; Ophthalmic
Abstract
The invention discloses a fundus lesion grading method based on small sample learning. Retinal fundus color photographs are first collected and preprocessed to build the fundus lesion data set; the image data then pass through a contrast network and a meta-network, both pre-trained and meta-trained, which respectively learn the intra-class and inter-class characteristics of each fundus lesion grade; finally, unlabeled images are scored for similarity against the prototype of each fundus lesion grade to predict the lesion grade. Through the dual-network structure formed by the contrast network and the meta-trained meta-network, the invention learns fundus lesions from only a small number of fundus photographs and grades them while effectively reducing the influence of noise in both the feature space and the label space, thereby improving the accuracy of fundus lesion prediction.
Description
Technical Field
The invention relates to a fundus lesion grading method based on image data, in particular to a fundus lesion grading method based on small sample learning, and belongs to the technical field of medical image processing and computer vision.
Background
In recent years, deep learning has been widely applied in computer vision and many other fields with remarkable results. Its high accuracy, however, depends heavily on large-scale labeled data, which is not always available in practice, for example in the medical field. Taking fundus disease prediction as an example, many conditions can be diagnosed or predicted from fundus photographs, including diabetic retinopathy, glaucoma, conversion of the contralateral eye to neovascular AMD within one year, cardiovascular diseases (ischemic stroke, myocardial infarction, heart failure) and neurodegenerative diseases (Parkinson's disease). Early prediction allows timely intervention; for patients with chronic diseases such as diabetes, early disease prediction can help prevent serious complications. However, retinal fundus color photographs are difficult to acquire for both technical and physiological reasons: pupil size, patient cooperation, corneal opacity, refractive error and similar problems can make acquisition impossible, and the quality of the camera and lens is critical to obtaining a high-quality fundus photograph. Low-quality equipment or lenses may distort the image, while high-quality fundus cameras are generally expensive and medical resources are limited. Labeled fundus lesion data usable as training samples are therefore relatively scarce, traditional deep learning methods are difficult to apply, and judging and screening diseases from the acquired fundus photographs requires a great deal of manpower and time.
Small sample (few-shot) learning, which imitates the human ability to recognize new classes from only a few examples, has attracted increasing interest because collecting large amounts of data is costly and laborious. Its purpose is to learn quickly from only a small number of labeled samples while generalizing well to new tasks. However, most existing small sample learning methods assume that the label information is completely clean and complete and do not consider the robustness of the model to noisy labels. In fact, noisy labels are ubiquitous in medical images owing to limited knowledge and unintentional corruption, so fundus lesion grading with existing small sample learning methods suffers from low prediction accuracy. Taking the grading of diabetic retinopathy (DR) as an example: on the one hand, grades that share the same judgment criterion are easily confused — mild NPDR and moderate NPDR are both judged by microaneurysms and differ only in severity, so samples in transition from mild to moderate NPDR are easily misidentified from the fundus photograph; on the other hand, a patient's fundus photograph may contain disease features unfamiliar or rare to the expert, and shooting angle, data transmission and image corruption introduce further errors, so expert grading annotations often deviate, producing noisily labeled samples. Such noise typically causes large model bias and seriously affects the prediction results.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a fundus lesion grading method based on small sample learning, which realizes fundus lesion grading while effectively reducing the influence of noise in both the feature space and the label space, thereby improving the accuracy of fundus lesion prediction.
To this end, the fundus lesion grading method based on small sample learning specifically comprises the following steps:
step1, collecting retinal fundus color photographs from a data source, exporting the pictures after data acquisition is completed, having all pictures labeled by professional doctors, and grading them according to the international clinical grading standard for fundus lesions;
step2, processing the fundus color photographs to improve image quality, reduce noise and normalize the images, then selecting K samples per grade to build the support set S = {(x_i, y_i)}_{i=1}^{N×K} and the query set Q = {(x_i, y_i)}_{i=N×K+1}^{N×K+M}, where y_i ∈ C_novel, K is the number of samples drawn for each fundus lesion grade and M is the number of samples in the query set Q;
at the same time, defining an auxiliary data set D_b = {(x_i, y_i)} with abundant samples and accurate labels, where y_i ∈ C_base and C_base ∩ C_novel = ∅; the auxiliary data set is likewise divided into a support set S_b and a query set Q_b, which are used for meta-training;
step3, constructing a contrast network for generating intra-class weights, which learns the intra-class characteristics of each fundus lesion grade after pre-training and meta-training;
step4, constructing a meta-network for generating inter-class weights, which learns the inter-class characteristics of each fundus lesion grade after pre-training and meta-training;
step5, correcting the fundus lesion prototypes with the sample weights extracted by the contrast network and the meta-network;
step6, scoring the similarity between unlabeled images and the prototype of each fundus lesion grade, so as to predict the fundus lesion grade.
Further, step3 specifically comprises the following steps:
step3-1, pre-training the contrast network g_ξ, whose feature extraction network maps each sample vector into a feature space;
step3-2, in the meta-training stage, taking the K samples under each category of the broader medical data set D_b (and, in the query stage, the K samples of each fundus lesion grade) and computing the pairwise similarity between samples by cosine similarity:
cor(x_a, x_b) = cos(g_ξ(x_a), g_ξ(x_b))
wherein g_ξ denotes the pre-trained contrast network feature extractor, and x_a, x_b denote two support samples;
this yields, for grade n, a K×K correlation matrix Z_n ∈ R^{K×K}; each correlation matrix Z_n contains the correlation information between one support sample and the remaining K−1 samples of the same grade, this information being spread over the other K−1 correlation features;
step3-3, the correlation matrix Z_n is fed directly into a Transformer layer: o_n = T_{φ_T}(Z_n), wherein φ_T denotes the Transformer layer parameters, Z_n the correlation matrix and o_n the output;
a softmax function ρ over the K samples then yields the intra-class weight vector corresponding to the samples: V_n = ρ(o_n);
this generates, for each sample, a vector characterizing its intra-class weight.
Further, step3-1 is specifically as follows:
step3-1-1, feature extractor pre-training: given a feature extractor f_θ(·), the prototype of each grade k is computed as c_k = (1/|S_k|) Σ_{(x_t, y_t) ∈ S_k} f_θ(x_t), wherein |S_k| is the number of support samples with fundus lesion grade k, x_t denotes a support sample, and y_t denotes the fundus lesion grade label of sample x_t;
new sample x for a given set of queries q The classifier outputs a normalized classification score for each class k
Wherein: sim (sim) w (. Cndot.) is a similarity function; f (f) θ (. Cndot.) is a feature extractor, x q To query the sample, c k A prototype of class k;
sim_ω(·,·) is computed as sim_ω(A, B) = λ·cos(F_ω(A), F_ω(B)), wherein F_ω(·) is a single-layer neural network parameterized by ω, and λ is an inverse temperature parameter;
θ and ω are updated by the loss function L_pre = E_{(x_q, y_q)}[−log p_{θ,ω}(y = y_q | x_q)], wherein x_q, y_q are both drawn from the query samples, p_{θ,ω} is the normalized score of the corresponding grade, and E denotes mathematical expectation;
step3-1-2, contrast network pre-training:
the contrast network g_ξ is trained with a conditional loss and a contrastive loss; given an input fundus color photograph x, two images x' and x'' processed by different data augmentation methods are generated and passed in turn through the contrastive feature extraction network g_ξ and a projection multi-layer perceptron head σ, which further projects and transforms the feature embedding extracted by g_ξ; the result projected by σ is finally passed through a prediction multi-layer perceptron head δ, giving the symmetric negative cosine similarity of the two embedded vectors, L_con = −(1/2)·cos(δ(σ(g_ξ(x'))), stopgrad(σ(g_ξ(x'')))) − (1/2)·cos(δ(σ(g_ξ(x''))), stopgrad(σ(g_ξ(x')))), wherein cos(a, b) = a·b/(‖a‖₂·‖b‖₂), ‖·‖₂ denotes the l2 norm and stopgrad(·) denotes the stop-gradient operation;
at the same time, a conditional loss L_cond is used to guide the learning of the contrast network, the guidance being provided by the feature extractor f_θ(·);
combining the contrastive loss with the conditional loss gives the optimization objective of the contrast network, L_g = L_con + γ·L_cond, wherein γ is a positive constant balancing the importance of the contrastive loss and the conditional loss.
Further, step4 specifically comprises the following steps:
step4-1, initializing a meta-learning network, whose feature extractor is denoted g_ψ below, for extracting support set features;
step4-2, first, performing K-Means clustering on all support samples, with the class prototypes of the support set as the initial cluster centers, until convergence;
step4-3, computing the cosine similarity between the final cluster centers and all samples to obtain a similarity matrix;
step4-4, using a softmax function to obtain the inter-class weight of a support sample x_t for fundus lesion grade n: w_t^n = exp(S_t^n/τ) / Σ_{n'} exp(S_t^{n'}/τ), wherein S_t^n denotes the similarity score of sample x_t for fundus lesion grade n, τ is a hyper-parameter that prevents gradient vanishing, and K denotes the number of samples in each category.
Further, step5 specifically comprises the following steps:
step5-1, the prototype correction in the contrast network is p_n = Σ_{k=1}^{K} (α·w_k^n + β·V_{n,k})·g_ξ(x_k^n), wherein α+β=1, w_k^n is the inter-class weight, V_{n,k} is the intra-class weight, g_ξ denotes the contrast network feature extractor, and x_k^n denotes the k-th sample of grade n;
step5-2, the prototype correction in the meta-network is p'_n = Σ_{k=1}^{K} (α·w_k^n + β·V_{n,k})·g_ψ(x_k^n), wherein α+β=1, w_k^n is the inter-class weight, V_{n,k} is the intra-class weight, g_ψ denotes the meta-network feature extractor, and x_k^n denotes the k-th sample of grade n.
Further, step6 specifically comprises the following steps:
step6-1, defining the EC similarity score S_EC(x, c_n) by fusing the cosine similarity cos(g_ξ(x), p_n) computed in the contrast network with the cosine similarity cos(g_ψ(x), p'_n) computed in the meta-network, scaled by the consistency of the two scores, wherein x denotes a query sample, c_n the sample center point of fundus lesion grade n, g_ξ the contrast network feature extractor, g_ψ the meta-network feature extractor, p_n the prototype corrected by the contrast network, and p'_n the prototype corrected by the meta-network;
step6-2, for a query sample x, computing the EC similarity score of every fundus lesion grade and obtaining the probability of each grade with a softmax function: P(y = n | x) = exp(S_EC(x, c_n)/T) / Σ_{n'} exp(S_EC(x, c_{n'})/T), wherein S_EC denotes the EC similarity score, x the query sample, c_n the sample center point of fundus lesion grade n, and T a temperature hyper-parameter.
Compared with the prior art, the fundus lesion grading method based on small sample learning has the following advantages:
1. In fundus disease prediction the data set is difficult to acquire, so the number of samples available for training an artificial neural network is limited and insufficient for a traditional neural network; the present method requires only a small number of labeled samples per lesion grade.
2. Because high-quality retinal fundus color photographs are difficult to obtain and unknown diseases can interfere with lesion identification, the experts' grading annotations may deviate. The invention constructs a dual-network structure that eliminates unreasonably labeled noisy images as far as possible and attempts to classify the fundus lesions correctly. The contrast network and the meta-network correct the fundus lesion prototype of each grade through the intra-class weights and inter-class weights respectively, improving the feature and label representation capability of the model. Metric-level calibration mitigates the effect of mispredictions by either network, improving the stability and robustness of the model.
Drawings
FIG. 1 is a schematic overall flow diagram of the present invention;
FIG. 2 is a block diagram of the data preprocessing of the present invention;
FIG. 3 is an overall framework of a dual network architecture in accordance with the present invention;
FIG. 4 is a schematic diagram of the generation of intra-class weights and inter-class weights in a dual network architecture of the present invention;
FIG. 5 is a schematic diagram of the prototype correction strategy in the dual network architecture of the present invention;
fig. 6 is a schematic diagram of the contrast network pre-training phase of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings, taking the grading of diabetic retinopathy (DR) lesions as an example.
As shown in fig. 1, retinal fundus color photographs are first collected and preprocessed, on the one hand so that the images fit the network input, and on the other hand to reduce the computation and speed up subsequent training, yielding the DR lesion data set; the image data then pass through the pre-trained and meta-trained contrast network and meta-network, which respectively learn the intra-class and inter-class characteristics of each DR grade and reduce the influence of noisy fundus photographs carrying annotation deviations; finally, unlabeled images are scored for similarity against each DR grade prototype to predict the DR grade. The method comprises the following steps:
step1, data acquisition and labeling
Retinal fundus color photographs are collected from data sources such as diabetic patients and healthy subjects. Data collection may be performed with sensors, medical devices, database queries or other means. After acquisition the pictures are exported and labeled by professional doctors, who grade them according to the new international clinical grading standard for diabetic retinopathy into no apparent retinopathy (grade I), mild NPDR (grade II), moderate NPDR (grade III), severe NPDR (grade IV) and PDR (grade V). The specific grading criteria and mydriatic fundus examination findings are shown in Table 1 below.
Table 1 diabetic retinopathy grading criteria
Step2, data preprocessing
Data preprocessing is performed before fundus photograph analysis in order to improve image quality, reduce noise and standardize the images, which facilitates subsequent analysis. As shown in fig. 2, data preprocessing adopts the following steps:
step2-1, image sharpening: by applying the image sharpening filter, edges and details of the image can be enhanced, improving the sharpness of the image. Common sharpening filters include Sobel, laplacian and Gao Sirui sharpening.
Step2-2, contrast enhancement: increasing the contrast of the image may make the lesions more visible. The contrast enhancement method includes histogram equalization and contrast stretching.
Step2-3, denoising: fundus photographs may be affected by various kinds of noise, such as illumination noise and artifacts. Denoising methods include median filtering, Gaussian filtering and wavelet denoising.
Step2-4, color normalization: the color and brightness of fundus images vary with the shooting conditions, so color normalization is required to keep color and brightness consistent across images.
Step2-5, removing fundus reflections: the optic disc (the bright central structure of the fundus) typically introduces large brightness changes in fundus photographs, and its influence sometimes needs to be removed or attenuated in order to better analyze the other parts of the retina.
Step2-6, image cropping: the image may be cropped to preserve only the region of interest (ROI) as needed for a particular task, thereby reducing the complexity of the process.
Step2-7, image scale standardization: the images are adjusted to the same size for training and analysis by the deep learning model. A sketch of Steps 2-1 to 2-7 follows.
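The following is a minimal sketch of Steps 2-1 to 2-7 in Python with OpenCV; the kernel sizes, CLAHE parameters, brightness threshold and output resolution are illustrative assumptions, not values prescribed by the invention.

```python
import cv2
import numpy as np

def preprocess_fundus(path, out_size=224):
    """Sharpen, enhance contrast, denoise, crop and normalize one fundus photo."""
    img = cv2.imread(path)                                # BGR uint8
    # Step2-1: unsharp-mask sharpening (Gaussian variant)
    blur = cv2.GaussianBlur(img, (0, 0), sigmaX=3)
    img = cv2.addWeighted(img, 1.5, blur, -0.5, 0)
    # Step2-2: contrast enhancement via CLAHE on the L channel
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    # Step2-3: median filtering to suppress speckle-like noise
    img = cv2.medianBlur(img, 3)
    # Step2-6: crop to the circular fundus ROI via a brightness mask
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ys, xs = np.where(gray > 10)
    if ys.size and xs.size:
        img = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Step2-7: resize, then Step2-4: normalize to zero mean / unit std per channel
    img = cv2.resize(img, (out_size, out_size)).astype(np.float32) / 255.0
    return (img - img.mean(axis=(0, 1))) / (img.std(axis=(0, 1)) + 1e-8)
```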
Step2-8, selecting the data set: K samples are selected for each of the five DR grades to form the support set S = {(x_i, y_i)}_{i=1}^{5K} and the query set Q = {(x_i, y_i)}_{i=5K+1}^{5K+M}, where y_i ∈ C_novel, K is the number of samples drawn per DR grade and M is the number of samples in the query set Q. Each FSL problem of estimating the categories of the query set samples from the support set can be seen as one task. At the same time, an auxiliary data set D_b = {(x_i, y_i)} with abundant samples and accurate labels is defined, where y_i ∈ C_base; this may be a data set derived from other fundus-related medical tasks, with the requirement C_base ∩ C_novel = ∅. The auxiliary data set is likewise divided into a support set S_b and a query set Q_b, which are used for meta-training. This strategy can be seen as training the model on data of the base classes C_base so that it generalizes well to C_novel; a task-sampling sketch follows.
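A minimal sketch of constructing one such N-way K-shot task; the data layout (a dict mapping each grade to its list of preprocessed images) is an assumption for illustration.

```python
import random

def sample_episode(data_by_grade, n_way=5, k_shot=5, m_query=15):
    """Sample one few-shot task: a support set of K images per grade
    and a query set of M images drawn from the same grades."""
    grades = random.sample(sorted(data_by_grade), n_way)
    support, query = [], []
    for label, grade in enumerate(grades):
        imgs = random.sample(data_by_grade[grade], k_shot + m_query // n_way)
        support += [(x, label) for x in imgs[:k_shot]]
        query += [(x, label) for x in imgs[k_shot:]]
    return support, query
```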
Step3, constructing a comparison network
To cope with the interference of noisy information with the training effect under small sample learning, a dual-network architecture is constructed, consisting of a contrast network, used to generate the intra-class weights, and a meta-network, used to generate the inter-class weights. The specific architecture is shown in fig. 3.
Step3-1, pre-training the contrast network g_ξ (see Step7 for details); its feature extraction network maps each sample vector into a feature space.
Step3-2, in the meta-training stage, for the K samples under each category of the broader medical data set D_b (and, in the query stage, the K samples of each DR grade), the pairwise similarity between samples is computed by cosine similarity:
cor(x_a, x_b) = cos(g_ξ(x_a), g_ξ(x_b))
wherein g_ξ denotes the pre-trained contrast network feature extractor, and x_a, x_b denote two support samples;
thus, a K X K correlation matrix Z can be obtained for the rank n n ∈R K×K For example Z 1 The overall correlation between the sample with the sugar network classified as class I and the class I sugar network is disclosed. Modeling can be performed with potential correlation between correlated samples of the same level. Each correlation matrix Z n The method comprises the steps of including the related information between a certain supporting sample and the rest K-1 samples of the same level, and dispersing the information on other K-1 related characteristics. The nature of the interconnect makes it more reasonable to fully consider the context of the relevant features of the same class when generating the intra-class weights.
Step3-3, the self-attention mechanism in the Transformer model uses the support samples to assign weights based on the similarity between them. Specifically, the correlation matrix obtained after Step3-2 is fed directly into a Transformer layer (without positional encoding): o_n = T_{φ_T}(Z_n), wherein φ_T denotes the Transformer layer parameters, Z_n the correlation matrix and o_n the output;
a softmax function ρ over the K samples then yields the intra-class weight vector corresponding to the samples: V_n = ρ(o_n)
After Step3-1, Step3-2 and Step3-3, a vector characterizing the intra-class weight can be generated for each sample, as sketched below. The overall process of generating these intra-class weights is shown in fig. 4.
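A minimal PyTorch sketch of Steps 3-2 and 3-3, assuming the support features have already been extracted by g_ξ; reducing the Transformer output o_n to one logit per sample with a linear layer is an assumption, as the last projection is not spelled out above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntraClassWeights(nn.Module):
    """Correlation matrix of one grade's K support embeddings -> K intra-class weights."""
    def __init__(self, k_shot, n_heads=1):
        super().__init__()
        # one Transformer encoder layer, no positional encoding (Step3-3)
        self.layer = nn.TransformerEncoderLayer(
            d_model=k_shot, nhead=n_heads, batch_first=True)
        self.score = nn.Linear(k_shot, 1)  # assumed reduction of o_n to one scalar per sample

    def forward(self, feats):              # feats: (K, D) embeddings g_xi(x) of one grade
        z = F.normalize(feats, dim=1)
        corr = z @ z.t()                   # (K, K) cosine correlation matrix Z_n (Step3-2)
        o = self.layer(corr.unsqueeze(0))  # (1, K, K) Transformer output o_n
        logits = self.score(o).squeeze()   # (K,) one logit per support sample
        return F.softmax(logits, dim=0)    # intra-class weight vector V_n
```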
Step4, constructing a meta-network
Step4-1, initializing a meta-learning network, whose feature extractor is denoted g_ψ, for extracting support set features.
Step4-2, first, K-Means clustering is performed on all support samples with five clusters, using the class prototypes of the support set as the initial cluster centers, until convergence. A class prototype is a representative representation generated from the support samples; in the invention the center (average value) of each class may be taken as the class prototype.
Step4-3, the cosine similarity between the final cluster centers and all samples is computed, consistent with Step4-2. Five cluster centers are obtained through Step4-2; computing the cosine similarity between each cluster center and every sample finally yields a similarity matrix S with one row per cluster center and one column per support sample.
Step4-4, a softmax function gives, for a support sample x_t, its inter-class weight for a given DR grade n: w_t^n = exp(S_t^n/τ) / Σ_{n'} exp(S_t^{n'}/τ), wherein S_t^n denotes the similarity score of sample x_t for DR grade n, τ is a hyper-parameter that prevents gradient vanishing, and K denotes the number of samples in each category. A sketch of Steps 4-2 to 4-4 follows.
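A minimal PyTorch sketch of Steps 4-2 to 4-4; the cosine-assignment K-Means update and the temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def inter_class_weights(feats, labels, n_grades=5, tau=0.1, iters=10):
    """K-Means from class prototypes, then softmax over center similarities.

    feats:  (T, D) meta-network embeddings of all support samples
    labels: (T,)   grade index 0..n_grades-1 of each sample
    returns (T, n_grades) inter-class weight of every sample for every grade
    """
    # Step4-2: initialize centers at the class prototypes, then run K-Means
    centers = torch.stack([feats[labels == n].mean(0) for n in range(n_grades)])
    for _ in range(iters):
        sim = F.normalize(feats, dim=1) @ F.normalize(centers, dim=1).t()
        assign = sim.argmax(dim=1)                      # nearest center per sample
        centers = torch.stack([
            feats[assign == n].mean(0) if (assign == n).any() else centers[n]
            for n in range(n_grades)])
    # Step4-3: cosine similarity matrix between final centers and all samples
    sim = F.normalize(feats, dim=1) @ F.normalize(centers, dim=1).t()  # (T, n_grades)
    # Step4-4: temperature softmax over grades -> inter-class weights
    return F.softmax(sim / tau, dim=1)
```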
Step5, prototype modification
A frame diagram of the prototype modification strategy is shown in fig. 5.
Step5-1, the prototype correction in the contrast network is p_n = Σ_{k=1}^{K} (α·w_k^n + β·V_{n,k})·g_ξ(x_k^n), wherein α+β=1, w_k^n is the inter-class weight obtained in Step4-4, V_{n,k} is the intra-class weight obtained in Step3-3, g_ξ denotes the contrast network feature extractor, and x_k^n denotes the k-th sample of grade n.
Step5-2, the prototype correction in the meta-network is p'_n = Σ_{k=1}^{K} (α·w_k^n + β·V_{n,k})·g_ψ(x_k^n), wherein α+β=1, w_k^n is the inter-class weight obtained in Step4-4, V_{n,k} is the intra-class weight obtained in Step3-3, g_ψ denotes the meta-network feature extractor, and x_k^n denotes the k-th sample of grade n. A sketch of this correction follows.
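A minimal sketch of the Step5 correction, under the assumption (consistent with α+β=1 above) that each support embedding contributes to the prototype with the convex combination of its inter-class and intra-class weights; the aggregation form is reconstructed from the surrounding text, not quoted from it.

```python
import torch

def corrected_prototype(feats_n, w_inter_n, w_intra_n, alpha=0.5):
    """Weighted prototype of one grade n.

    feats_n:   (K, D) embeddings g(x_k^n) of the grade's support samples
    w_inter_n: (K,)   inter-class weights of those samples (Step4-4)
    w_intra_n: (K,)   intra-class weights of those samples (Step3-3)
    """
    beta = 1.0 - alpha                      # alpha + beta = 1
    w = alpha * w_inter_n + beta * w_intra_n
    w = w / w.sum()                         # renormalize so the prototype stays in scale
    return (w.unsqueeze(1) * feats_n).sum(dim=0)   # (D,) corrected prototype p_n
```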
Step6, similarity score
To mitigate the effect of mispredictions by either network, metric-level calibration is introduced over the two networks, and a method called EC similarity is used to compute the prediction score of each DR grade. As shown in fig. 6, the computation of the EC similarity score comprises the following steps:
step6-1, define the EC similarity score as follows:
wherein: x represents a query sample; c n A sample center point representing the sugar network level; g ξ Representing a comparison network feature extractor;representing a meta-network feature extractor; p is p n Representing the prototype corrected by the comparison network; p's' n Representing the prototype after the meta-network modification.
That is, for each DR grade, an incoming query sample x (a retinal fundus color photograph) is mapped through the contrast network g_ξ and the meta-network g_ψ, after which the corresponding EC score can be computed.
Step6-2, for a query sample x, the EC similarity scores of the grade I/II/III/IV/V DR lesions are computed, and a softmax function gives the probability of each DR grade: P(y = n | x) = exp(S_EC(x, c_n)/T) / Σ_{n'} exp(S_EC(x, c_{n'})/T), wherein S_EC denotes the EC similarity score, x the query sample, c_n the sample center point of the lesion grade, and T a temperature hyper-parameter. A sketch of this scoring follows.
Step7, model Pre-training
Step7-1, feature extractor pre-training: a prototype of each grade can be computed with the feature extractor. Given a feature extractor f_θ(·), the prototype of each grade k can be computed as c_k = (1/|S_k|) Σ_{(x_t, y_t) ∈ S_k} f_θ(x_t), wherein |S_k| is the number of support samples with DR grade k, f_θ is implemented with a Conv-4-64 backbone network, x_t denotes a support sample, and y_t denotes the label of sample x_t, i.e., its DR grade;
new sample x for a given set of queries q The classifier outputs a normalized classification score for each class k
Wherein: sim (sim) w (. Cndot.) is a similarity function; f (f) θ (. Cndot.) is a feature extractor, x q To query the sample, c k Is a prototype of class k.
sim_ω(·,·) is computed as sim_ω(A, B) = λ·cos(F_ω(A), F_ω(B)), wherein F_ω(·) is a single-layer neural network parameterized by ω with output dimension 2048, and λ is an inverse temperature parameter.
θ and ω are updated by the loss function L_pre = E_{(x_q, y_q)}[−log p_{θ,ω}(y = y_q | x_q)], wherein x_q, y_q are both drawn from the query samples, p_{θ,ω} is the normalized score of the corresponding grade computed as above, and E denotes mathematical expectation. A sketch of this pre-training loss follows.
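A minimal PyTorch sketch of the Step7-1 prototypical classification loss; the 1600-dimensional feature matches the Conv-4-64 embedding quoted below, while treating λ as a learnable scalar is an illustrative choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtoClassifier(nn.Module):
    """Prototype scores sim_w(f(x_q), c_k) = lambda * cos(F_w(.), F_w(.))."""
    def __init__(self, feat_dim=1600, proj_dim=2048):
        super().__init__()
        self.F_w = nn.Linear(feat_dim, proj_dim)          # single-layer network F_w
        self.log_lambda = nn.Parameter(torch.zeros(()))   # inverse temperature lambda

    def forward(self, query_feats, prototypes):
        a = F.normalize(self.F_w(query_feats), dim=1)     # (M, proj_dim)
        b = F.normalize(self.F_w(prototypes), dim=1)      # (N, proj_dim)
        return self.log_lambda.exp() * (a @ b.t())        # (M, N) scaled cosine scores

def pretrain_loss(classifier, f_theta, support_x, support_y, query_x, query_y, n_cls):
    feats = f_theta(support_x)                            # (N*K, feat_dim)
    protos = torch.stack([feats[support_y == k].mean(0) for k in range(n_cls)])
    logits = classifier(f_theta(query_x), protos)
    return F.cross_entropy(logits, query_y)               # -log p(y_q | x_q)
```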
Step7-2, contrast network pre-training: the contrast network g_ξ can be trained with a conditional loss and a contrastive loss. Given an input fundus color photograph x, two images x' and x'' processed by different data augmentation methods are generated and passed in turn through the contrastive feature extraction network g_ξ and a projection multi-layer perceptron head σ, which further projects and transforms the feature embedding extracted by the feature extractor (i.e., the contrastive feature extraction network g_ξ). Finally, the result projected by σ is passed through a prediction multi-layer perceptron head δ, giving the symmetric negative cosine similarity of the two embedded vectors: L_con = −(1/2)·cos(δ(σ(g_ξ(x'))), stopgrad(σ(g_ξ(x'')))) − (1/2)·cos(δ(σ(g_ξ(x''))), stopgrad(σ(g_ξ(x')))), wherein cos(a, b) = a·b/(‖a‖₂·‖b‖₂), ‖·‖₂ denotes the l2 norm, and stop-gradient is an operation commonly used when constructing loss functions in which certain parameters should not be updated by the gradient but remain fixed.
At the same time, a conditional loss is used to guide the learning of the contrast network, the guidance being provided by the feature extractor f_θ(·) (see Step7-1).
combining the comparison loss with the conditional loss to obtain an objective function of the comparison network optimization, wherein the objective function is as follows:
wherein: gamma is a normal number, balancing the importance of contrast loss and conditional loss.
f_θ is implemented with a Conv-4-64 backbone network. The projection multi-layer perceptron head σ consists of the single-layer neural network F_ω(·) followed by a multi-layer neural network whose layers have {1600, 2048, 2048} units with batch normalization. The prediction multi-layer perceptron head δ is parameterized by a three-layer neural network with 512 hidden units and batch normalization in the hidden layers; all layers use the ReLU function as the activation function. The contrastive branch is sketched below.
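A minimal PyTorch sketch of the Step7-2 contrastive branch following the stop-gradient scheme above; the conditional-loss term is omitted because its closed form is not given here, and the head sizes follow the {1600, 2048, 2048} and 512-unit figures just quoted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(dims):
    """Linear-BatchNorm-ReLU stack; the final layer is a bare Linear."""
    layers = []
    for i in range(len(dims) - 1):
        layers += [nn.Linear(dims[i], dims[i + 1]), nn.BatchNorm1d(dims[i + 1]), nn.ReLU()]
    return nn.Sequential(*layers[:-2])     # drop BN and ReLU after the last Linear

class ContrastHead(nn.Module):
    def __init__(self, g_xi, feat_dim=1600):
        super().__init__()
        self.g_xi = g_xi                               # backbone feature extractor
        self.sigma = mlp([feat_dim, 2048, 2048])       # projection head sigma
        self.delta = mlp([2048, 512, 2048])            # prediction head delta

    def forward(self, x1, x2):                         # two augmented views x', x''
        z1, z2 = self.sigma(self.g_xi(x1)), self.sigma(self.g_xi(x2))
        p1, p2 = self.delta(z1), self.delta(z2)
        # symmetric negative cosine with stop-gradient on the target branch
        return -0.5 * (F.cosine_similarity(p1, z2.detach(), dim=1).mean()
                       + F.cosine_similarity(p2, z1.detach(), dim=1).mean())
```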
Step8, model element training
The training set used in the meta-training stage is derived from the broader medical data set D_b, which must contain no samples of the DR lesion set in use, i.e. C_base ∩ C_novel = ∅; D_b contains N categories. Step1 to Step7 show how the model recognizes the DR grades (i.e. the case N = 5); for the broader medical data set D_b the steps are the same.
Step8-1, defining the meta-loss L_me as the average classification loss over the query set, wherein M is the total number of samples of the query set;
Step8-2, injecting artificial label noise into the data set D_b and defining the intra-class noise loss L_ra on D_b with an indicator that equals 1 when a sample's label is artificially corrupted and 0 otherwise;
Step8-3, for all support set samples, constructing a similarity matrix M by computing the similarity between every two samples, and defining on it the inter-class loss L_er, wherein l(x^(i)) denotes the true label of x^(i).
The final loss function is defined as L_total = L_me + η·L_ra + γ·L_er, wherein η and γ are positive constants representing the importance of the different losses; a sketch of the combined objective follows.
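A minimal sketch of the Step8 objective. The closed forms of L_me, L_ra and L_er are not reproduced above (the original formula images are missing), so the helper terms below — cross-entropy over the query set, cross-entropy restricted to artificially noised samples, and a penalty on high similarity between samples with different true labels — are assumptions that merely match the described roles.

```python
import torch
import torch.nn.functional as F

def meta_total_loss(logits, labels, noise_mask, feats, eta=0.1, gamma=0.1):
    """L_total = L_me + eta * L_ra + gamma * L_er (Step8).

    logits:     (M, N) query predictions      labels: (M,) query labels
    noise_mask: (M,) 1.0 where the label was artificially corrupted (role of L_ra)
    feats:      (M, D) embeddings used for the pairwise similarity matrix of L_er
    """
    l_me = F.cross_entropy(logits, labels)                   # meta-loss over the query set
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    l_ra = (per_sample * noise_mask).sum() / noise_mask.sum().clamp(min=1)
    sim = F.normalize(feats, dim=1) @ F.normalize(feats, dim=1).t()   # similarity matrix M
    diff = (labels.unsqueeze(0) != labels.unsqueeze(1)).float()
    l_er = (sim.clamp(min=0) * diff).mean()   # assumed: discourage cross-class similarity
    return l_me + eta * l_ra + gamma * l_er
```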
Step9, diabetic retinopathy grading
The existing labeled DR-graded pictures (the support set S_n, drawn from C_novel) are put into the network, and the network model already trained in Step8 learns the new DR data set by itself.
Finally, when performing grading prediction on a single fundus color photograph, the EC similarity is used to measure the score of each grade, and the grade with the highest score is taken as the final predicted label: ŷ = argmax_n S_EC(x, c_n). The probability of each DR grade can then be computed with the softmax formula of Step6-2, giving the label distribution over the DR grades.
The method can be implemented on hardware platforms such as computers, servers and mobile devices, which handle the data processing and model training involved. The method may also incorporate real-time monitoring equipment and a patient database to continuously track and update the grading results. Both the contrast network g_ξ and the meta-network g_ψ in the denoising network can be implemented with a ConvNet (C64E) backbone: in C64E, each block consists of a 64-channel 3×3 convolution, batch normalization, a ReLU nonlinearity and 2×2 max pooling, and the feature embedding dimension is set to 1600, as sketched below.
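A minimal sketch of the C64E (Conv-4-64) backbone just described; the 84×84 input resolution is an assumption commonly paired with this backbone (it yields the quoted 1600-dimensional embedding, 64×5×5), not a value fixed by the patent.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch=64):
    """One C64E block: 64-channel 3x3 conv, batch norm, ReLU, 2x2 max pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2))

class Conv4_64(nn.Module):
    """Four stacked C64E blocks; flattens to a 1600-d embedding for 84x84 input."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(*[conv_block(3 if i == 0 else 64) for i in range(4)])

    def forward(self, x):                      # x: (B, 3, 84, 84)
        return self.encoder(x).flatten(1)      # (B, 64*5*5) = (B, 1600)
```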
The fundus lesion grading method based on small sample learning removes data with labeling deviations from the fundus color photographs through the dual-network structure formed by the contrast network and the meta-network, and on this basis predicts the lesion grade of the fundus photographs in the query set. The dual-network architecture eliminates data noise through calibration at two levels: example-level calibration and metric-level calibration. In example-level calibration, the prototypes are corrected with the sample weights extracted by the two networks; this better captures the relations among DR samples within a grade and across grades, improves the model's ability to distinguish samples of different grades, and lets the corrected prototypes represent the feature and label information of each DR grade more accurately, improving performance on the few-sample DR grading task and strengthening the model's feature and label representation capability. In metric-level calibration, the Ensemble with Consistency (EC) principle is introduced: the similarity between two examples is computed by fusing the similarity estimates of the two different networks. EC similarity adjusts the confidence of a similarity prediction according to the consistency of the two networks and scales the prediction score accordingly, so the similarity between two DR samples is assessed more accurately — concretely, the similarity score of each DR grade is implicitly scaled and the prediction scores are adaptively calibrated by the consistency of the two networks' similarity scores. Metric-level calibration mitigates the effect of mispredictions by either network and improves the stability and robustness of the model.
The artificial neural network comprises a pre-training module that adopts self-supervised learning: it uses the supervision information in the labeled data to shape and improve the self-supervised feature manifold without auxiliary unlabeled data, reducing representation bias and mining more effective semantic information. When pre-training the contrast network, the feature extractor is first learned on the labeled data with an ordinary supervised learning method, and the prototype of each category — i.e., each DR grade — is computed. A conditional self-supervised model is then trained with a self-supervision module and a condition module. The self-supervision module generates two different augmented views by random augmentation and computes the similarity loss between the embedded vectors, i.e., between DR lesion images of the same patient seen from different views. The condition module uses the features learned in the pre-training stage as prior knowledge — the learned prototype representation of each DR grade — to guide and optimize the feature manifold learned by the self-supervision module, so that the contrast network combines multiple views to extract more semantic information and obtain better representations.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311491052.9A CN117557840B (en) | 2023-11-10 | 2023-11-10 | Fundus lesion grading method based on small sample learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311491052.9A CN117557840B (en) | 2023-11-10 | 2023-11-10 | Fundus lesion grading method based on small sample learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117557840A true CN117557840A (en) | 2024-02-13 |
CN117557840B CN117557840B (en) | 2024-05-24 |
Family
ID=89817778
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311491052.9A Active CN117557840B (en) | 2023-11-10 | 2023-11-10 | Fundus lesion grading method based on small sample learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117557840B (en) |
Patent Citations (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108615051A (en) * | 2018-04-13 | 2018-10-02 | 博众精工科技股份有限公司 | Diabetic retina image classification method based on deep learning and system |
US20200234445A1 (en) * | 2018-04-13 | 2020-07-23 | Bozhon Precision Industry Technology Co., Ltd. | Method and system for classifying diabetic retina images based on deep learning |
CN110969191A (en) * | 2019-11-07 | 2020-04-07 | 吉林大学 | Prediction method of glaucoma prevalence probability based on similarity-preserving metric learning method |
CN111639679A (en) * | 2020-05-09 | 2020-09-08 | 西北工业大学 | Small sample learning method based on multi-scale metric learning |
EP3944185A1 (en) * | 2020-07-23 | 2022-01-26 | INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência | Computer-implemented method, system and computer program product for detecting a retinal condition from eye fundus images |
CN111858991A (en) * | 2020-08-06 | 2020-10-30 | 南京大学 | A Few-Sample Learning Algorithm Based on Covariance Metrics |
CN116529762A (en) * | 2020-10-23 | 2023-08-01 | 基因泰克公司 | Multimodal map atrophic lesion segmentation |
US20230342935A1 (en) * | 2020-10-23 | 2023-10-26 | Genentech, Inc. | Multimodal geographic atrophy lesion segmentation |
AU2020103938A4 (en) * | 2020-12-07 | 2021-02-11 | Capital Medical University | A classification method of diabetic retinopathy grade based on deep learning |
CN113361612A (en) * | 2021-06-11 | 2021-09-07 | 浙江工业大学 | Magnetocardiogram classification method based on deep learning |
CN113537305A (en) * | 2021-06-29 | 2021-10-22 | 复旦大学 | Image classification method based on matching network less-sample learning |
WO2023056681A1 (en) * | 2021-10-09 | 2023-04-13 | 北京鹰瞳科技发展股份有限公司 | Method for training multi-disease referral system, multi-disease referral system and method |
CN114022766A (en) * | 2021-11-04 | 2022-02-08 | 江苏农林职业技术学院 | Tea typical disease image recognition system and method based on small sample learning |
CN114283355A (en) * | 2021-12-06 | 2022-04-05 | 重庆邮电大学 | A Multi-target Endangered Animal Tracking Method Based on Few-Sample Learning |
CN114494195A (en) * | 2022-01-26 | 2022-05-13 | 南通大学 | Few-Shot Attention Mechanism Parallel Siamese Approach for Fundus Image Classification |
CN114898158A (en) * | 2022-05-24 | 2022-08-12 | 杭州电子科技大学 | Small-sample traffic anomaly image acquisition method and system based on multi-scale attention coupling mechanism |
CN115019089A (en) * | 2022-05-30 | 2022-09-06 | 中科苏州智能计算技术研究院 | A Two-Stream Convolutional Neural Network for Few-Sample Learning |
CN115170868A (en) * | 2022-06-17 | 2022-10-11 | 湖南大学 | Clustering-based small sample image classification two-stage meta-learning method |
CN115359294A (en) * | 2022-08-23 | 2022-11-18 | 上海交通大学 | Cross-granularity small sample learning method based on similarity regularization intra-class mining |
CN115458174A (en) * | 2022-09-20 | 2022-12-09 | 吉林大学 | A method for constructing an intelligent diagnosis model of diabetic retinopathy |
CN115731411A (en) * | 2022-10-27 | 2023-03-03 | 西北工业大学 | A Few-Sample Image Classification Method Based on Prototype Generation |
CN115910385A (en) * | 2022-11-28 | 2023-04-04 | 中科院成都信息技术股份有限公司 | Pathological degree prediction method, system, medium, equipment and terminal |
CN116525075A (en) * | 2023-04-27 | 2023-08-01 | 四川师范大学 | Method and system for computer-aided diagnosis of thyroid nodules based on few-sample learning |
CN116824212A (en) * | 2023-05-11 | 2023-09-29 | 杭州聚秀科技有限公司 | Fundus photo classification method based on small sample learning |
CN116503668A (en) * | 2023-05-18 | 2023-07-28 | 西安交通大学 | Medical image classification method based on small sample element learning |
CN116612335A (en) * | 2023-07-18 | 2023-08-18 | 贵州大学 | A Few-Sample Fine-grained Image Classification Method Based on Contrastive Learning |
CN116883157A (en) * | 2023-09-07 | 2023-10-13 | 南京大数据集团有限公司 | Small sample credit assessment method and system based on metric learning |
Non-Patent Citations (4)
Title |
---|
LEI SHI; BIN WANG; JUNXING ZHANG: "A Multi-stage Transfer Learning Framework for Diabetic Retinopathy Grading on Small Data", ICC 2023 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 23 October 2023 (2023-10-23) * |
ZHANG SHANWEN; ZHANG CHUANLEI; ZHANG YUNLONG: "Radar target recognition based on two-dimensional locality-sensitive discriminant analysis", Electronics Optics & Control, no. 04, 1 April 2013 (2013-04-01) *
LI QIONG; BAI ZHENGYAO; LIU YINGFANG: "Deep learning classification method for diabetic retinopathy images", Journal of Image and Graphics, no. 10, 16 October 2018 (2018-10-16) *
DONG YU, ZHANG YOUPENG: "A conflicting evidence combination method based on clustering-based weighting", Computer Software and Computer Applications, 22 March 2023 (2023-03-22) *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117935030A (en) * | 2024-03-22 | 2024-04-26 | 广东工业大学 | Multi-label confidence calibration method and system based on dual-view correlation-aware regularization |
CN117935030B (en) * | 2024-03-22 | 2024-10-25 | 广东工业大学 | Multi-label confidence calibration method and system based on dual-view correlation-aware regularization |
CN118411573A (en) * | 2024-07-01 | 2024-07-30 | 苏州大学 | An automatic classification method and system for rare fundus diseases based on OCT images |
CN118411573B (en) * | 2024-07-01 | 2024-10-25 | 苏州大学 | OCT image-based rare fundus disease automatic classification method and system |
Also Published As
Publication number | Publication date |
---|---|
CN117557840B (en) | 2024-05-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |