
CN117557840A - A method for grading fundus lesions based on small sample learning - Google Patents


Info

Publication number: CN117557840A (granted publication: CN117557840B)
Application number: CN202311491052.9A
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Active (granted)
Prior art keywords: network, fundus, sample, samples, meta
Inventors: 徐晓, 李甦雁, 吴亮, 余颖, 杨旭, 牛亮, 曹咏琦, 李蕴龙, 牛强
Assignees: FIRST PEOPLE'S HOSPITAL OF XUZHOU; China University of Mining and Technology (CUMT)
Priority/filing date: 2023-11-10
Publication of CN117557840A: 2024-02-13
Grant and publication of CN117557840B: 2024-05-24


Classifications

    • G06V 10/764: image or video recognition using machine-learning classification, e.g. of video objects
    • G06N 3/0464: convolutional networks [CNN, ConvNet]
    • G06N 3/048: activation functions
    • G06N 3/08: learning methods for neural networks
    • G06T 7/0012: biomedical image inspection
    • G06V 10/44: local feature extraction (edges, contours, loops, corners, strokes, intersections); connectivity analysis
    • G06V 10/761: proximity, similarity or dissimilarity measures in feature spaces
    • G06V 10/806: fusion of extracted features
    • G06V 10/82: image or video recognition using neural networks
    • G06T 2207/20081: training; learning (image-analysis indexing scheme)
    • G06T 2207/20084: artificial neural networks [ANN] (image-analysis indexing scheme)
    • G06T 2207/30041: eye; retina; ophthalmic (biomedical image processing, indexing scheme)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fundus lesion grading method based on small sample (few-shot) learning. Retinal fundus color photographs are first collected and preprocessed to obtain the available fundus lesion data set. The image data then pass through a contrast network and a meta-network, each pre-trained and meta-trained, which respectively learn the intra-class and inter-class characteristics of each fundus lesion grade. Finally, unlabeled images are scored for similarity against the prototype of each lesion grade to predict the fundus lesion grade. With the dual-network structure formed by the contrast network and the meta-trained meta-network, the method learns fundus lesion grades by itself from only a small number of samples, classifies fundus lesions while effectively reducing the influence of noise in both the feature space and the label space, and thereby improves the accuracy of fundus lesion prediction.

Description

Fundus lesion grading method based on small sample learning
Technical Field
The invention relates to a fundus lesion grading method based on image data, and in particular to a fundus lesion grading method based on small sample learning, belonging to the technical fields of medical image processing and computer vision.
Background
In recent years, deep learning techniques have been widely applied in fields such as computer vision and have achieved remarkable results. Their high accuracy, however, depends heavily on large-scale labeled data, which is not always available in real life, for example in the medical field. Taking fundus disease prediction as an example, fundus photographs can be used to diagnose and predict diseases such as diabetic retinopathy, glaucoma, conversion of the fellow eye to neovascular AMD within one year, cardiovascular diseases (ischemic stroke, myocardial infarction, heart failure), and neurodegenerative diseases (Parkinson's disease). Early prediction allows timely intervention; especially for patients with chronic diseases such as diabetes, early disease prediction can help prevent serious complications. However, retinal fundus color photographs are difficult to obtain for both technical and physiological reasons: problems such as pupil size, patient cooperation, corneal opacity, and refractive error can prevent image acquisition, and the quality of the equipment and lenses used is critical to obtaining high-quality fundus photographs. Low-quality equipment or lenses can distort the image, while high-quality fundus cameras are generally expensive and medical resources are limited. Labeled fundus lesion data usable as training samples are therefore relatively scarce, which makes traditional deep learning methods hard to apply; judging and screening diseases from the collected fundus color photographs is consequently difficult and requires considerable manpower and time.
Small sample (few-shot) learning, which imitates how humans recognize new classes from only a few examples in a task, has attracted increasing interest because collecting large amounts of data is costly and laborious. Its purpose is to learn quickly from only a small number of labeled samples while generalizing well to new tasks. However, most existing few-shot learning methods rest on the assumption that the label information is completely clean and complete, and do not consider the robustness of the model to noisy labels. In fact, noisy labels are ubiquitous in medical images because of limited knowledge and unintentional corruption, so fundus lesion grading with existing few-shot methods suffers from low prediction accuracy. Taking the grading of diabetic retinopathy (DR, colloquially the "sugar net") as an example: on the one hand, grades that share the same judging criterion but differ in degree are easily confused. Mild and moderate NPDR, for instance, are both judged by microaneurysms and differ only in severity, so samples in transition from mild to moderate NPDR are easily misidentified from the fundus photograph. On the other hand, because a patient's fundus photograph may contain disease features unfamiliar or rare to the expert, and because of shooting angle, data transmission, image corruption and similar causes, experts often deviate when assigning grade labels, producing noisily labeled samples. Such noise typically causes large model bias and seriously degrades the prediction result.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a fundus lesion grading method based on small sample learning, which achieves fundus lesion grading while effectively reducing the influence of noise in both the feature space and the label space, thereby improving the accuracy of fundus lesion prediction.
To achieve this purpose, the fundus lesion grading method based on small sample learning comprises the following steps:
Step1, collecting retinal fundus color photographs from a data source, exporting the pictures after acquisition is complete, and having professional physicians label all pictures and grade them according to the international clinical grading standard for fundus lesions;
Step2, after processing the fundus color photographs to improve image quality, reduce noise and normalize the images, selecting the data set: for each grade, $K$ samples are selected to form the support set $S=\{(x_i,y_i)\}$ and the query set $Q=\{(x_i,y_i)\}$, where $y_i \in C_{novel}$, $K$ is the number of samples extracted per fundus lesion grade, and $M$ is the number of samples in the query set $Q$;
at the same time, defining an auxiliary data set $D_b=\{(x_i,y_i)\}$ with abundant, accurately labeled samples, where $y_i \in C_{base}$ and $C_{base} \cap C_{novel} = \varnothing$ is required; the auxiliary data set is likewise divided into a support set $S_b$ and a query set $Q_b$, which are used for meta-training;
Step3, constructing a contrast network for generating intra-class weights and, after pre-training and meta-training, learning the intra-class features of each fundus lesion grade;
Step4, constructing a meta-network for generating inter-class weights and, after pre-training and meta-training, learning the inter-class features of each fundus lesion grade;
Step5, correcting the fundus lesion prototypes using the sample weights extracted by the contrast network and the meta-network;
Step6, scoring the similarity between unlabeled images and the prototype of each fundus lesion grade to perform grading prediction of fundus lesions.
Further, Step3 specifically comprises the following steps:
Step3-1, pre-training the contrast network $g_\xi$, which maps sample vectors into a feature space through the feature extraction network;
Step3-2, in the meta-training stage, for the $K$ samples under each category of the broader medical data set $D_b$, and in the query stage for the $K$ samples of each fundus lesion grade, computing the similarity between every two samples by cosine similarity:
$cor(x_a, x_b) = \cos(g_\xi(x_a), g_\xi(x_b))$
where $g_\xi$ denotes the pre-trained contrast network feature extractor, and $x_a$, $x_b$ denote two support samples;
this yields, for grade $n$, a $K \times K$ correlation matrix $Z_n \in \mathbb{R}^{K \times K}$; each correlation matrix $Z_n$ contains the correlation information between one support sample and the remaining $K-1$ samples of the same grade, and this information is simultaneously dispersed over the other $K-1$ correlation features;
Step3-3, feeding the correlation matrix $Z_n$ directly into a Transformer layer:
$o_n = \mathcal{T}_{\phi_T}(Z_n)$
where $\phi_T$ denotes the Transformer layer parameters, $Z_n$ the correlation matrix, and $o_n$ the output;
then using the $K$-sample softmax function $\rho$ to compute the intra-class weight vector $V_n$ corresponding to each sample:
$V_n = \rho(o_n)$
so that a vector characterizing the intra-class weights is generated for each sample.
Further, Step3-1 specifically proceeds as follows:
Step3-1-1, feature extractor pre-training: given a feature extractor $f_\theta(\cdot)$, the prototype of each grade $k$ is computed as
$c_k = \frac{1}{|S_k|} \sum_{(x_t, y_t) \in S,\, y_t = k} f_\theta(x_t)$
where $|S_k|$ is the number of support samples of fundus lesion grade $k$; $x_t$ denotes a support sample; $y_t$ denotes the fundus lesion grade label of sample $x_t$;
given a new query sample $x_q$, the classifier outputs a normalized classification score for each grade $k$:
$p_{\theta,\omega}(y = k \mid x_q) = \frac{\exp(sim_w(f_\theta(x_q), c_k))}{\sum_{k'} \exp(sim_w(f_\theta(x_q), c_{k'}))}$
where $sim_w(\cdot)$ is a similarity function; $f_\theta(\cdot)$ is the feature extractor, $x_q$ the query sample, and $c_k$ the prototype of grade $k$;
$sim_w(\cdot)$ is computed as
$sim_w(A, B) = \lambda \cos(F_w(A), F_w(B))$
where $F_w(\cdot)$ is a $w$-parameterized single-layer neural network and $\lambda$ is an inverse temperature parameter;
$\theta$ and $\omega$ are then updated with the loss function
$\min_{\theta,\omega} \mathbb{E}\left[-\log p_{\theta,\omega}(y_q \mid x_q)\right]$
where $x_q$, $y_q$ both come from the query samples; $p_{\theta,\omega}$ is the normalized score of the corresponding grade; $\mathbb{E}$ denotes mathematical expectation;
Step3-1-2, contrast network pre-training:
the contrast network $g_\xi$ is trained with a conditional loss and a contrastive loss. Given an input fundus color photograph $x$, two images $x'$ and $x''$ processed by different data augmentation methods are generated; the two augmented images are fed in turn into the contrast feature extraction network $g_\xi$ and a projection multi-layer perceptron head $\sigma$, which further projects and transforms the raw feature embedding extracted by $g_\xi$; finally the result mapped through $\sigma$ is fed into a prediction multi-layer perceptron head $\delta$, giving the negative cosine similarity of the two embedding vectors:
$D(u, v) = -\frac{u}{\|u\|_2} \cdot \frac{v}{\|v\|_2}$, with $u = \delta(\sigma(g_\xi(x')))$ and $v = \text{stop-gradient}(\sigma(g_\xi(x'')))$
where $\|\cdot\|_2$ denotes the $l_2$ norm and stop-gradient is the stop-gradient operation;
at the same time, a conditional loss, guided by the feature extractor $f_\theta(\cdot)$, steers the learning of the contrast network;
combining the contrastive loss with the conditional loss gives the optimization objective of the contrast network:
$L = L_{con} + \gamma L_{cond}$
where $\gamma$ is a positive constant balancing the importance of the contrastive loss and the conditional loss.
Further, Step4 specifically comprises the following steps:
Step4-1, initializing a meta-learning network $\tilde{g}$ for extracting support-set features;
Step4-2, first performing K-Means clustering on all support samples, with the class prototypes of the support samples as the initial cluster centers, until convergence;
Step4-3, computing the cosine similarity between the final cluster centers and all samples to obtain a similarity matrix;
Step4-4, applying the softmax function to obtain the inter-class weight of a support sample $x_t$ for a given fundus lesion grade:
$w_t^n = \frac{\exp(s_t^n / \tau)}{\sum_{n'} \exp(s_t^{n'} / \tau)}$
where $s_t^n$ denotes the similarity score of sample $x_t$ for fundus lesion grade $n$; $\tau$ is a hyperparameter used to prevent vanishing gradients; $K$ denotes the number of samples per category.
Further, Step5 specifically comprises the following steps:
Step5-1, the prototype correction in the contrast network is
$p_n = \sum_{k=1}^{K} \left(\alpha\, w_{n,k}^{inter} + \beta\, w_{n,k}^{intra}\right) g_\xi(x_k^n)$
where $\alpha + \beta = 1$; $w^{inter}$ is the inter-class weight; $w^{intra}$ is the intra-class weight; $g_\xi$ denotes the contrast network feature extractor; $x_t$ denotes the $t$-th sample; $x_k^n$ denotes the $k$-th sample of grade $n$;
Step5-2, the prototype correction in the meta-network is
$p'_n = \sum_{k=1}^{K} \left(\alpha\, w_{n,k}^{inter} + \beta\, w_{n,k}^{intra}\right) \tilde{g}(x_k^n)$
where $\alpha + \beta = 1$; $w^{inter}$ is the inter-class weight; $w^{intra}$ is the intra-class weight; $\tilde{g}$ denotes the meta-network feature extractor; $x_t$ denotes the $t$-th sample; $x_k^n$ denotes the $k$-th sample of grade $n$.
Further, Step6 specifically comprises the following steps:
Step6-1, defining the EC similarity score $S_{EC}(x, c_n)$ by fusing the similarities between the query embedding and the corrected prototypes in the two networks, where $x$ denotes a query sample; $c_n$ denotes the sample center of the fundus lesion grade; $g_\xi$ denotes the contrast network feature extractor; $\tilde{g}$ denotes the meta-network feature extractor; $p_n$ denotes the prototype corrected by the contrast network; $p'_n$ denotes the prototype corrected by the meta-network;
Step6-2, for a query sample $x$, after computing the EC similarity score of each fundus lesion grade, using the softmax function to obtain the probability of the corresponding grade:
$P(y = n \mid x) = \frac{\exp(S_{EC}(x, c_n) / T)}{\sum_{n'} \exp(S_{EC}(x, c_{n'}) / T)}$
where $S_{EC}$ denotes the EC similarity score; $x$ denotes the query sample; $c_n$ denotes the sample center of the fundus lesion grade; $T$ is a hyperparameter.
Compared with the prior art, the fundus lesion grading method based on small sample learning has the following advantages:
1. In fundus disease prediction, the data set is difficult to acquire, so the number of samples available for training an artificial neural network is too limited to train a traditional neural network. By learning from only a small number of labeled samples per grade, the proposed small-sample method remains usable where conventional deep learning is not.
2. Because high-quality retinal fundus color photographs are difficult to obtain and unknown diseases can affect the identification of fundus lesions, experts' grading annotations may deviate. By constructing a dual-network structure, the invention eliminates unreasonably labeled noisy images as far as possible and attempts to classify the fundus lesions correctly. The contrast network and the meta-network correct the fundus lesion prototype of each grade through the intra-class and inter-class weights respectively, improving the feature and label representation capability of the model. Metric-level calibration mitigates the effect of mispredictions by either network, improving the stability and robustness of the model.
Drawings
FIG. 1 is a schematic overall flow diagram of the present invention;
FIG. 2 is a block diagram of the data preprocessing of the present invention;
FIG. 3 is an overall framework of a dual network architecture in accordance with the present invention;
FIG. 4 is a schematic diagram of the generation of intra-class weights and inter-class weights in a dual network architecture of the present invention;
FIG. 5 is a schematic diagram of the prototype modification strategy in a dual network architecture of the present invention;
fig. 6 is a schematic diagram of the comparative network pre-training phase of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings, taking the grading of diabetic retinopathy (DR, colloquially the "sugar net") as an example.
As shown in fig. 1, retinal fundus color photographs are first collected and preprocessed, which on the one hand adapts the images to the network input and on the other hand reduces the computation and speeds up the subsequent training process, yielding the available DR lesion data set. The image data then pass through the pre-trained and meta-trained contrast network and meta-network, which respectively learn the intra-class and inter-class features of each DR grade and reduce the influence of noisy fundus photographs with biased labels. Finally, unlabeled images are scored for similarity against the prototype of each DR grade to predict the grade. The method comprises the following steps:
step1, data acquisition and labeling
Retinal fundus color photographs are collected from data sources such as diabetic patients and healthy subjects. Data collection may be performed with sensors, medical devices, database queries, and similar means. After acquisition the pictures are exported and labeled by professional physicians, who grade them according to the new international clinical grading standard for diabetic retinopathy into no obvious retinopathy (grade I), mild NPDR (grade II), moderate NPDR (grade III), severe NPDR (grade IV), and PDR (grade V). The specific grading criteria under mydriatic fundus examination are shown in Table 1 below.
Table 1 Diabetic retinopathy grading criteria (international clinical DR severity scale, findings on mydriatic fundus examination)
Grade I, no obvious retinopathy: no abnormalities.
Grade II, mild NPDR: microaneurysms only.
Grade III, moderate NPDR: more than microaneurysms alone, but less than severe NPDR.
Grade IV, severe NPDR: any of more than 20 intraretinal hemorrhages in each of the four quadrants, definite venous beading in two or more quadrants, or prominent intraretinal microvascular abnormalities in one or more quadrants, with no signs of PDR.
Grade V, PDR: neovascularization and/or vitreous or preretinal hemorrhage.
Step2, data preprocessing
Data preprocessing is performed before analyzing the fundus color photographs in order to improve image quality, reduce noise, and standardize the images for subsequent analysis. As shown in fig. 2, data preprocessing uses the following steps:
step2-1, image sharpening: by applying the image sharpening filter, edges and details of the image can be enhanced, improving the sharpness of the image. Common sharpening filters include Sobel, laplacian and Gao Sirui sharpening.
Step2-2, contrast enhancement: increasing the contrast of the image may make the lesions more visible. The contrast enhancement method includes histogram equalization and contrast stretching.
Step2-3, denoising: fundus photographs may contain various kinds of noise, such as illumination noise and artifacts. Denoising methods include median filtering, Gaussian filtering, and wavelet denoising.
Step2-4, color normalization: the color and brightness of fundus images may be different depending on different photographing conditions, and thus color normalization is required to keep the color and brightness uniform between different images.
Step2-5, removing fundus reflection: the optic disc (the bright central region of the fundus photograph) typically introduces large luminance changes, and the effect of this area sometimes needs to be removed or alleviated in order to better analyze other portions of the retina.
Step2-6, image cropping: the image may be cropped to preserve only the region of interest (ROI) as needed for a particular task, thereby reducing the complexity of the process.
Step2-7, standardization of image scale: the images are adjusted to the same size for training and analysis of the deep learning model.
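As a concrete illustration of Step2-1 through Step2-7, the following Python sketch chains the named operations with OpenCV. It is a minimal example under assumed settings (the unsharp-mask weights, kernel sizes, and the 224-pixel output size are illustrative, not the patent's values), and it leaves out the optic-disc handling of Step2-5.

```python
import cv2
import numpy as np

def preprocess_fundus(img_bgr, out_size=224):
    """Illustrative Step2 pipeline: sharpen, enhance contrast, denoise,
    normalize color, crop to the central region, and resize."""
    # Step2-1: unsharp-mask (Gaussian) sharpening.
    blur = cv2.GaussianBlur(img_bgr, (0, 0), sigmaX=3)
    img = cv2.addWeighted(img_bgr, 1.5, blur, -0.5, 0)
    # Step2-2: contrast enhancement by equalizing the luminance histogram.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    lab[..., 0] = cv2.equalizeHist(lab[..., 0])
    img = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    # Step2-3: median filtering as a simple denoiser.
    img = cv2.medianBlur(img, 3)
    # Step2-4: per-channel normalization to zero mean, unit variance.
    img = img.astype(np.float32)
    img = (img - img.mean(axis=(0, 1))) / (img.std(axis=(0, 1)) + 1e-6)
    # Step2-6/2-7: center-crop a square ROI and rescale to a fixed size.
    h, w = img.shape[:2]
    s = min(h, w)
    img = img[(h - s) // 2:(h + s) // 2, (w - s) // 2:(w + s) // 2]
    return cv2.resize(img, (out_size, out_size))
```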
Step2-8, selecting the data set: for each of the five DR grades, $K$ samples are selected to form the support set $S=\{(x_i,y_i)\}$ and the query set $Q=\{(x_i,y_i)\}$, where $y_i \in C_{novel}$, $K$ is the number of samples extracted per DR grade, and $M$ is the number of samples in the query set $Q$. Each few-shot learning (FSL) problem in which the support set is used to estimate the categories of query-set samples can be seen as one task. At the same time, an auxiliary data set $D_b=\{(x_i,y_i)\}$ with abundant, accurately labeled samples is defined, where $y_i \in C_{base}$; this data set may come from other fundus medical tasks, and $C_{base} \cap C_{novel} = \varnothing$ is required. The auxiliary data set is likewise divided into a support set $S_b$ and a query set $Q_b$, which are used for meta-training. This strategy can be seen as training the model on data of the base classes $C_{base}$ so that it generalizes well to $C_{novel}$.
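For instance, one few-shot task of Step2-8 can be sampled as in the sketch below, assuming the data set is an iterable of (image, grade) pairs; the parameter names n_way, k_shot and q_per_grade are hypothetical.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=5, q_per_grade=3):
    """Build one FSL task: a support set S with K samples per grade and a
    query set Q with further samples from the same grades (Step2-8)."""
    by_grade = defaultdict(list)
    for image, grade in dataset:
        by_grade[grade].append(image)
    grades = random.sample(sorted(by_grade), n_way)
    support, query = [], []
    for g in grades:
        picks = random.sample(by_grade[g], k_shot + q_per_grade)
        support += [(x, g) for x in picks[:k_shot]]   # K-shot support set
        query += [(x, g) for x in picks[k_shot:]]     # M = n_way * q_per_grade
    return support, query
```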
Step3, constructing the contrast network
To cope with the interference of noisy information on training under few-shot learning, a dual-network architecture is constructed, consisting of a contrast network that generates the intra-class weights and a meta-network that generates the inter-class weights. The specific architecture is shown in fig. 3.
Step3-1, pre-train the contrast network $g_\xi$ (see Step7 for details); the sample vectors are mapped into a feature space by the feature extraction network.
Step3-2, in the meta-training stage, for the $K$ samples under each category of the broader medical data set $D_b$, and in the query stage for the $K$ samples of each DR grade, the similarity between every two samples is computed by cosine similarity:
$cor(x_a, x_b) = \cos(g_\xi(x_a), g_\xi(x_b))$
where $g_\xi$ denotes the pre-trained contrast network feature extractor, and $x_a$, $x_b$ denote two support samples.
A $K \times K$ correlation matrix $Z_n \in \mathbb{R}^{K \times K}$ is thus obtained for grade $n$; for example, $Z_1$ reveals the overall correlation between a sample classified as grade I and the other grade-I samples, so the potential correlation between related samples of the same grade can be modeled. Each correlation matrix $Z_n$ contains the correlation information between one support sample and the remaining $K-1$ samples of the same grade, and this information is simultaneously dispersed over the other $K-1$ correlation features. This interconnected nature makes it more reasonable to fully consider the context of related features of the same class when generating the intra-class weights.
Step3-3, the self-attention mechanism in the Transformer model uses the support samples to assign weights based on the similarity between them. Specifically, after Step3-2 produces the inter-sample correlation matrix, the correlation matrix $Z_n$ is fed directly into a Transformer layer (without positional encoding):
$o_n = \mathcal{T}_{\phi_T}(Z_n)$
where $\phi_T$ denotes the Transformer layer parameters, $Z_n$ the correlation matrix, and $o_n$ the output.
The $K$-sample softmax function $\rho$ is then used to compute the intra-class weight vector $V_n$ corresponding to each sample:
$V_n = \rho(o_n)$
After Step3-1, Step3-2 and Step3-3, a vector characterizing the intra-class weights is generated for each sample. The overall process of generating these intra-class weights is shown in fig. 4.
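A minimal PyTorch sketch of Step3-2/3-3 for the K support samples of one grade follows. How the Transformer output o_n is reduced to one scalar weight per sample is not spelled out in the text, so a linear head is assumed here, and the layer sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntraClassWeights(nn.Module):
    """Correlation matrix Z_n from pairwise cosine similarities, a
    Transformer layer without positional encoding, then a softmax over
    the K samples of the grade."""
    def __init__(self, k):
        super().__init__()
        self.transformer = nn.TransformerEncoderLayer(
            d_model=k, nhead=1, batch_first=True)   # stands in for phi_T
        self.score = nn.Linear(k, 1)                # assumed pooling of o_n

    def forward(self, feats):                       # feats: (K, D) = g_xi(x)
        z = F.normalize(feats, dim=1)
        corr = z @ z.t()                            # Z_n in R^{K x K}
        o = self.transformer(corr.unsqueeze(0)).squeeze(0)   # o_n: (K, K)
        return F.softmax(self.score(o).squeeze(-1), dim=0)   # V_n: (K,)
```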
Step4, constructing a meta-network
Step4-1, initialize a meta-learning network, denoted $\tilde{g}$, for extracting support-set features.
Step4-2, first, for all support samples, K-Means clustering is performed with the class prototypes of the support samples as the initial cluster centers (five clusters, one per grade) until convergence. A class prototype is a representative embedding generated from the support samples; in the invention, the center (mean) of each class may be taken as the class prototype.
Step4-3, compute the cosine similarity between the final cluster centers and all samples, consistent with Step4-2. Five cluster centers are obtained from Step4-2; computing the cosine similarity between each cluster center and all samples finally yields a similarity matrix $S \in \mathbb{R}^{25 \times K}$.
Step4-4, the softmax function gives the inter-class weight of a support sample $x_t$ $(t = 1, 2, 3, \ldots, 25K)$ for a given DR grade:
$w_t^n = \frac{\exp(s_t^n / \tau)}{\sum_{n'} \exp(s_t^{n'} / \tau)}$
where $s_t^n$ denotes the similarity score of sample $x_t$ for DR grade $n$; $\tau$ is a hyperparameter used to prevent vanishing gradients; $K$ denotes the number of samples per category.
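The inter-class weights of Step4-2 through Step4-4 can be sketched as follows. A fixed iteration count stands in for the convergence test, cosine assignment is used to stay consistent with the similarity measure of Step4-3, and the temperature value is illustrative.

```python
import torch
import torch.nn.functional as F

def inter_class_weights(feats, prototypes, tau=0.1, iters=10):
    """K-Means seeded with the class prototypes, cosine similarity of every
    sample to the final centers, and a temperature softmax per sample."""
    centers = prototypes.clone()                  # (N, D) initial centers
    for _ in range(iters):
        sim = F.normalize(feats, dim=1) @ F.normalize(centers, dim=1).t()
        assign = sim.argmax(dim=1)                # nearest center per sample
        for n in range(centers.size(0)):
            members = feats[assign == n]
            if len(members) > 0:
                centers[n] = members.mean(dim=0)
    # similarity matrix between all samples and the final cluster centers
    s = F.normalize(feats, dim=1) @ F.normalize(centers, dim=1).t()
    return F.softmax(s / tau, dim=1)              # one weight row per sample
```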
Step5, prototype modification
A frame diagram of the prototype modification strategy is shown in fig. 5.
Step5-1, the prototype correction in the contrast network is
$p_n = \sum_{k=1}^{K} \left(\alpha\, w_{n,k}^{inter} + \beta\, w_{n,k}^{intra}\right) g_\xi(x_k^n)$
where $\alpha + \beta = 1$; the inter-class weight $w^{inter}$ is obtained from Step4-4; the intra-class weight $w^{intra}$ is obtained from Step3-3; $g_\xi$ denotes the contrast network feature extractor; $x_t$ denotes the $t$-th sample; $x_k^n$ denotes the $k$-th sample of grade $n$.
Step5-2, the prototype correction in the meta-network is
$p'_n = \sum_{k=1}^{K} \left(\alpha\, w_{n,k}^{inter} + \beta\, w_{n,k}^{intra}\right) \tilde{g}(x_k^n)$
where $\alpha + \beta = 1$; the inter-class weight $w^{inter}$ is obtained from Step4-4; the intra-class weight $w^{intra}$ is obtained from Step3-3; $\tilde{g}$ denotes the meta-network feature extractor; $x_t$ denotes the $t$-th sample; $x_k^n$ denotes the $k$-th sample of grade $n$.
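Read this way, the correction is a weighted sum of the support embeddings. The exact combination rule is not given in the text beyond α+β=1, so the sketch below assumes the per-sample weight α·w_inter + β·w_intra with renormalization; both are assumptions.

```python
import torch

def corrected_prototype(feats, w_inter, w_intra, alpha=0.5, beta=0.5):
    """Assumed Step5 correction: combine each support sample's inter- and
    intra-class weights (alpha + beta = 1) and form the weighted mean of
    the grade's K support embeddings."""
    w = alpha * w_inter + beta * w_intra        # (K,) combined weights
    w = w / w.sum()                             # keep the prototype scale
    return (w.unsqueeze(1) * feats).sum(dim=0)  # (D,) corrected prototype
```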
Step6, similarity score
To mitigate the effect of mispredictions by either network, metric-level calibration is introduced on both networks, using a method called EC similarity to help compute the prediction score of each DR grade. As shown in fig. 6, the calculation of the EC similarity score includes the following steps:
Step6-1, define the EC similarity score $S_{EC}(x, c_n)$ by fusing the similarities between the query embedding and the corrected prototypes in the two networks, where $x$ denotes a query sample; $c_n$ denotes the sample center of the DR grade; $g_\xi$ denotes the contrast network feature extractor; $\tilde{g}$ denotes the meta-network feature extractor; $p_n$ denotes the prototype corrected by the contrast network; $p'_n$ denotes the prototype corrected by the meta-network.
That is, for each DR grade, the incoming query sample $x$ (i.e., a retinal fundus color photograph) is mapped through the contrast network $g_\xi$ and the meta-network $\tilde{g}$, after which the corresponding EC score can be calculated.
Step6-2, for a query sample $x$, after computing the EC similarity scores of the grade I/II/III/IV/V DR lesions, the softmax function gives the probability of each DR grade:
$P(y = n \mid x) = \frac{\exp(S_{EC}(x, c_n) / T)}{\sum_{n'} \exp(S_{EC}(x, c_{n'}) / T)}$
where $S_{EC}$ denotes the EC similarity score; $x$ denotes the query sample; $c_n$ denotes the sample center of the fundus lesion grade; $T$ is a hyperparameter.
Step7, model pre-training
Step7-1, feature extractor pre-training: a prototype of each grade can be computed by the feature extractor. Given a feature extractor $f_\theta(\cdot)$, the prototype of each grade $k$ can be calculated as
$c_k = \frac{1}{|S_k|} \sum_{(x_t, y_t) \in S,\, y_t = k} f_\theta(x_t)$
where $|S_k|$ is the number of support samples of DR grade $k$; $f_\theta$ is implemented by a Conv-4-64 backbone network; $x_t$ denotes a support sample; $y_t$ denotes the label of sample $x_t$, i.e. the DR grade of sample $x_t$.
Given a new query sample $x_q$, the classifier outputs a normalized classification score for each grade $k$:
$p_{\theta,\omega}(y = k \mid x_q) = \frac{\exp(sim_w(f_\theta(x_q), c_k))}{\sum_{k'} \exp(sim_w(f_\theta(x_q), c_{k'}))}$
where $sim_w(\cdot)$ is a similarity function; $f_\theta(\cdot)$ is the feature extractor, $x_q$ the query sample, and $c_k$ the prototype of grade $k$.
$sim_w(\cdot)$ is computed as
$sim_w(A, B) = \lambda \cos(F_w(A), F_w(B))$
where $F_w(\cdot)$ is a $w$-parameterized single-layer neural network with output dimension 2048, and $\lambda$ is an inverse temperature parameter.
$\theta$ and $\omega$ are then updated with the loss function
$\min_{\theta,\omega} \mathbb{E}\left[-\log p_{\theta,\omega}(y_q \mid x_q)\right]$
where $x_q$, $y_q$ both come from the query samples; $p_{\theta,\omega}$ is the normalized score of the corresponding grade, computed as above; $\mathbb{E}$ denotes mathematical expectation.
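A compact PyTorch sketch of the Step7-1 objective; f_theta and F_w are passed in as modules, and the inverse-temperature value lam is an assumed setting.

```python
import torch
import torch.nn.functional as F

def proto_pretrain_loss(f_theta, F_w, support_x, support_y,
                        query_x, query_y, n_grades=5, lam=10.0):
    """Class prototypes c_k from the support set, the learned similarity
    sim_w(A, B) = lam * cos(F_w(A), F_w(B)), and the negative
    log-likelihood over query samples that updates theta and w."""
    z = f_theta(support_x)                               # (S, D)
    protos = torch.stack([z[support_y == k].mean(dim=0)  # c_k per grade
                          for k in range(n_grades)])     # (N, D)
    q = F.normalize(F_w(f_theta(query_x)), dim=1)        # (Q, D')
    p = F.normalize(F_w(protos), dim=1)                  # (N, D')
    logits = lam * (q @ p.t())                           # sim_w scores
    return F.cross_entropy(logits, query_y)              # E[-log p(y_q|x_q)]
```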
Step7-2, contrast network pre-training:
The contrast network $g_\xi$ can be trained with a conditional loss and a contrastive loss. Given an input fundus color photograph $x$, two images $x'$ and $x''$ processed by different data augmentation methods are generated; the two augmented images are fed in turn into the contrast feature extraction network $g_\xi$ and a projection multi-layer perceptron head $\sigma$, which further projects and transforms the raw feature embedding extracted by the feature extractor (i.e. the contrast feature extraction network $g_\xi$). Finally, the result mapped through $\sigma$ is fed into a prediction multi-layer perceptron head $\delta$, giving the negative cosine similarity of the two embedding vectors:
$D(u, v) = -\frac{u}{\|u\|_2} \cdot \frac{v}{\|v\|_2}$, with $u = \delta(\sigma(g_\xi(x')))$ and $v = \text{stop-gradient}(\sigma(g_\xi(x'')))$
where $\|\cdot\|_2$ denotes the $l_2$ norm; stop-gradient is a stop-gradient operation, commonly used when constructing loss functions in which certain parameters should not be updated according to the gradient but should remain unchanged.
At the same time, a conditional loss, guided by the feature extractor $f_\theta(\cdot)$ (see Step7-1), steers the learning of the contrast network.
Combining the contrastive loss with the conditional loss gives the optimization objective of the contrast network:
$L = L_{con} + \gamma L_{cond}$
where $\gamma$ is a positive constant balancing the importance of the contrastive loss and the conditional loss.
$f_\theta$ is implemented by the Conv-4-64 backbone network. The projection multi-layer perceptron head $\sigma$ consists of the $w$-parameterized single-layer neural network $F_w(\cdot)$ and a multi-layer neural network whose layers have {1600, 2048, 2048} units with batch normalization. The prediction multi-layer perceptron head $\delta$ is parameterized by a three-layer neural network with 512 hidden units and batch normalization at the hidden layers; all parts use the ReLU function as the activation function.
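The symmetric SimSiam-style step of Step7-2 can be sketched as below. The conditional loss guided by f_theta is not fully specified in the text, so the mean-squared distillation term here is only a stand-in assumption.

```python
import torch
import torch.nn.functional as F

def contrast_step(g_xi, sigma, delta, x1, x2, f_theta=None, gamma=1.0):
    """Negative cosine similarity with stop-gradient between two augmented
    views x1, x2, plus an assumed conditional term toward f_theta."""
    def neg_cos(p, z):
        # -cosine similarity; detaching z is the stop-gradient operation
        return -(F.normalize(p, dim=1)
                 * F.normalize(z.detach(), dim=1)).sum(dim=1).mean()

    z1, z2 = sigma(g_xi(x1)), sigma(g_xi(x2))   # projected embeddings
    p1, p2 = delta(z1), delta(z2)               # predicted embeddings
    l_con = 0.5 * (neg_cos(p1, z2) + neg_cos(p2, z1))
    l_cond = torch.tensor(0.0)
    if f_theta is not None:                     # placeholder conditional loss
        l_cond = F.mse_loss(g_xi(x1), f_theta(x1).detach())
    return l_con + gamma * l_cond               # L = L_con + gamma * L_cond
```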
Step8, model meta-training
The training set used in the meta-training phase comes from the broader medical data set $D_b$, which must contain no samples of the DR lesion set in use, i.e. $C_{base} \cap C_{novel} = \varnothing$; $D_b$ contains $N$ categories. Step1 to Step7 showed how the model learns DR grading (i.e. the case $N = 5$); for the broader medical data set $D_b$ the steps are the same.
Step8-1, define the meta-loss $L_{me}$ as the average classification loss over the query set, where $M$ is the total number of samples of the query set.
Artificial noise is introduced in the data set $D_b$, and the intra-class noise loss $L_{ra}$ on $D_b$ is defined with an indicator that equals 1 when a label is artificially corrupted noise and 0 otherwise.
For all support-set samples, a similarity matrix $M$ is constructed by computing the similarity between any two samples, and the inter-class loss $L_{er}$ is defined on it, where $l(x^{(i)})$ denotes the true label of $x^{(i)}$.
The final loss function is defined as
$L_{total} = L_{me} + \eta L_{ra} + \gamma L_{er}$
where $\eta$ and $\gamma$ are positive constants representing the importance of the different losses.
Step9, diabetic retinopathy grading
The existing labeled DR grading pictures (support set $S_n$, drawn from $C_{novel}$) are put into the network, and the network model already meta-trained in Step8 learns the new DR data set by itself.
Finally, when grading a single fundus color photograph, the EC similarity is used to measure the score of each grade, and the grade with the highest score is taken as the final predicted label:
$\hat{y} = \arg\max_n S_{EC}(x, c_n)$
The probability of each DR grade can then be calculated with the softmax formula of Step6-2, yielding the label distribution over the DR grades.
The method can be implemented on hardware platforms such as computers, servers and mobile devices, covering the data processing and model training involved. The method may also incorporate real-time monitoring equipment and a patient database to continuously track and update the grading results. In the denoising architecture, both the contrast network $g_\xi$ and the meta-network $\tilde{g}$ can be implemented with a ConvNet (C64E) backbone. In C64E, each block consists of a 64-channel 3×3 convolution, batch normalization, a ReLU nonlinearity, and 2×2 max pooling. The feature embedding dimension is set to 1600.
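For reference, a PyTorch rendering of the C64E block structure just described; the 84×84 input resolution is an assumption (it is what makes four conv-pool blocks flatten to the stated 1600 dimensions, 64×5×5).

```python
import torch.nn as nn

class C64E(nn.Module):
    """Conv-4-64 / C64E backbone: four blocks of 64-channel 3x3 convolution,
    batch normalization, ReLU, and 2x2 max pooling."""
    def __init__(self, in_ch=3):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(4):
            layers += [nn.Conv2d(ch, 64, 3, padding=1),
                       nn.BatchNorm2d(64),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
            ch = 64
        self.encoder = nn.Sequential(*layers)

    def forward(self, x):                      # x: (B, 3, 84, 84)
        return self.encoder(x).flatten(1)      # (B, 64*5*5) = (B, 1600)
```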
The fundus lesion grading method based on small sample learning removes the data with labeling deviations from the fundus color photographs through the dual-network structure formed by the contrast network and the meta-network, and on this basis performs grading prediction on the fundus color photographs in the query set. The dual-network architecture is calibrated in two ways to eliminate data noise: example-level calibration and metric-level calibration. For example-level calibration, the prototypes are corrected with the sample weights extracted by the two networks; this better captures the relationships among DR samples within each grade and across different grades, improves the model's ability to distinguish samples of different grades, and lets the corrected prototypes represent the feature and label information of each DR grade more accurately, improving performance on few-sample DR grading tasks and strengthening the feature and label representation capability of the model. For metric-level calibration, the Ensemble with Consistency (EC) principle is introduced: the similarity between two examples is computed by fusing the similarity evaluations of the two different networks. EC similarity adjusts the confidence of a similarity prediction according to the consistency of the similarity across the two networks and scales the similarity prediction scores, so the similarity between two DR samples can be evaluated more accurately. Concretely, the similarity score of each DR grade is implicitly scaled, and the prediction scores are adaptively calibrated by measuring the consistency of the similarity scores predicted by the two networks. Metric-level calibration thus mitigates the effect of mispredictions by either network and improves the stability and robustness of the model.
The artificial neural network includes a pre-training module that adopts a self-supervised learning method: it exploits the supervision information in labeled data to shape and improve the self-supervised feature manifold without auxiliary unlabeled data, reducing representation bias and mining more effective semantic information. When the contrast network is pre-trained, the feature extractor is first learned on the labeled data with the original supervised learning method, and the prototype of each category, i.e. of each DR grade, is computed. Next, a conditional self-supervised model is trained using the self-supervision module and the condition module. The self-supervision module generates two different augmented views with a random augmentation method and computes the similarity loss between the embedding vectors, i.e. the similarity loss between DR lesion images of the same patient seen from different viewing angles. The condition module uses the features learned in the pre-training stage as prior knowledge, i.e. uses the learned prototype representation of each DR grade as guidance, to optimize the feature manifold learned by the self-supervision module, enabling the model to extract more semantic information in the contrast network by combining multiple viewpoints and thereby obtain a better representation.

Claims (6)

1. A fundus lesion grading method based on small sample learning, characterized by comprising the following steps:
Step1, collecting retinal fundus color photographs from a data source, exporting the pictures after acquisition is complete, and having professional physicians label all pictures and grade them according to the international clinical grading standard for fundus lesions;
Step2, after processing the fundus color photographs to improve image quality, reduce noise and normalize the images, selecting the data set: for each grade, $K$ samples are selected to form the support set $S=\{(x_i,y_i)\}$ and the query set $Q=\{(x_i,y_i)\}$, where $y_i \in C_{novel}$, $K$ is the number of samples extracted per fundus lesion grade, and $M$ is the number of samples in the query set $Q$; at the same time, defining an auxiliary data set $D_b=\{(x_i,y_i)\}$ with abundant, accurately labeled samples, where $y_i \in C_{base}$ and $C_{base} \cap C_{novel} = \varnothing$ is required; the auxiliary data set is likewise divided into a support set $S_b$ and a query set $Q_b$ for meta-training;
Step3, constructing a contrast network for generating intra-class weights and, after pre-training and meta-training, learning the intra-class features of each fundus lesion grade;
Step4, constructing a meta-network for generating inter-class weights and, after pre-training and meta-training, learning the inter-class features of each fundus lesion grade;
Step5, correcting the fundus lesion prototypes using the sample weights extracted by the contrast network and the meta-network;
Step6, scoring the similarity between unlabeled images and the prototype of each fundus lesion grade to perform grading prediction of fundus lesions.

2. The fundus lesion grading method based on small sample learning according to claim 1, characterized in that Step3 specifically proceeds as follows:
Step3-1, pre-training the contrast network $g_\xi$ and mapping the sample vectors into a feature space through the feature extraction network;
Step3-2, in the meta-training stage, for the $K$ samples under each category of the broader medical data set $D_b$, and in the query stage for the $K$ samples of each fundus lesion grade, computing the similarity between every two samples by cosine similarity:
$cor(x_a, x_b) = \cos(g_\xi(x_a), g_\xi(x_b))$
where $g_\xi$ denotes the pre-trained contrast network feature extractor, and $x_a$, $x_b$ denote two support samples;
obtaining for grade $n$ a $K \times K$ correlation matrix $Z_n \in \mathbb{R}^{K \times K}$, each correlation matrix $Z_n$ containing the correlation information between one support sample and the remaining $K-1$ samples of the same grade, this information being simultaneously dispersed over the other $K-1$ correlation features;
Step3-3, feeding the correlation matrix $Z_n$ directly into a Transformer layer:
$o_n = \mathcal{T}_{\phi_T}(Z_n)$
where $\phi_T$ denotes the Transformer layer parameters, $Z_n$ the correlation matrix, and $o_n$ the output;
then using the $K$-sample softmax function $\rho$ to compute the intra-class weight vector $V_n$ of each sample:
$V_n = \rho(o_n)$
generating for each sample a vector characterizing its intra-class weight.

3. The fundus lesion grading method based on small sample learning according to claim 2, characterized in that Step3-1 specifically proceeds as follows:
Step3-1-1, feature extractor pre-training: given a feature extractor $f_\theta(\cdot)$, the prototype of each grade $k$ is computed as
$c_k = \frac{1}{|S_k|} \sum_{(x_t, y_t) \in S,\, y_t = k} f_\theta(x_t)$
where $|S_k|$ is the number of support samples of fundus lesion grade $k$, $x_t$ denotes a support sample, and $y_t$ denotes the fundus lesion grade label of sample $x_t$;
given a new query sample $x_q$, the classifier outputs a normalized classification score for each grade $k$:
$p_{\theta,\omega}(y = k \mid x_q) = \frac{\exp(sim_w(f_\theta(x_q), c_k))}{\sum_{k'} \exp(sim_w(f_\theta(x_q), c_{k'}))}$
where $sim_w(\cdot)$ is a similarity function, $f_\theta(\cdot)$ the feature extractor, $x_q$ the query sample, and $c_k$ the prototype of grade $k$;
$sim_w(\cdot)$ is computed as $sim_w(A, B) = \lambda \cos(F_w(A), F_w(B))$, where $F_w(\cdot)$ is a $w$-parameterized single-layer neural network and $\lambda$ an inverse temperature parameter;
$\theta$ and $\omega$ are then updated with the loss $\min_{\theta,\omega} \mathbb{E}[-\log p_{\theta,\omega}(y_q \mid x_q)]$, where $x_q$, $y_q$ come from the query samples, $p_{\theta,\omega}$ is the normalized score of the corresponding grade, and $\mathbb{E}$ denotes mathematical expectation;
Step3-1-2, contrast network pre-training:
the contrast network $g_\xi$ is trained with a conditional loss and a contrastive loss; given an input fundus color photograph $x$, two images $x'$ and $x''$ processed by different data augmentation methods are generated and fed in turn into the contrast feature extraction network $g_\xi$ and a projection multi-layer perceptron head $\sigma$, which further projects and transforms the raw feature embedding extracted from $g_\xi$; finally the result mapped through $\sigma$ is fed into a prediction multi-layer perceptron head $\delta$, giving the negative cosine similarity of the two embedding vectors:
$D(u, v) = -\frac{u}{\|u\|_2} \cdot \frac{v}{\|v\|_2}$, with $u = \delta(\sigma(g_\xi(x')))$ and $v = \text{stop-gradient}(\sigma(g_\xi(x'')))$
where $\|\cdot\|_2$ denotes the $l_2$ norm and stop-gradient is the stop-gradient operation;
at the same time, a conditional loss, guided by the feature extractor $f_\theta(\cdot)$, steers the learning of the contrast network;
combining the contrastive loss with the conditional loss gives the optimization objective of the contrast network:
$L = L_{con} + \gamma L_{cond}$
where $\gamma$ is a positive constant balancing the importance of the contrastive loss and the conditional loss.

4. The fundus lesion grading method based on small sample learning according to claim 1, characterized in that Step4 specifically proceeds as follows:
Step4-1, initializing a meta-learning network $\tilde{g}$ for extracting support-set features;
Step4-2, first performing K-Means clustering on all support samples, with the class prototypes of the support samples as the initial cluster centers, until convergence;
Step4-3, computing the cosine similarity between the final cluster centers and all samples to obtain a similarity matrix;
Step4-4, applying the softmax function to obtain the inter-class weight of a support sample $x_t$ for a given fundus lesion grade:
$w_t^n = \frac{\exp(s_t^n / \tau)}{\sum_{n'} \exp(s_t^{n'} / \tau)}$
where $s_t^n$ denotes the similarity score of sample $x_t$ for fundus lesion grade $n$, $\tau$ is a hyperparameter used to prevent vanishing gradients, and $K$ denotes the number of samples per category.

5. The fundus lesion grading method based on small sample learning according to claim 1, characterized in that Step5 specifically proceeds as follows:
Step5-1, the prototype correction in the contrast network is
$p_n = \sum_{k=1}^{K} \left(\alpha\, w_{n,k}^{inter} + \beta\, w_{n,k}^{intra}\right) g_\xi(x_k^n)$
where $\alpha + \beta = 1$, $w^{inter}$ is the inter-class weight, $w^{intra}$ the intra-class weight, $g_\xi$ the contrast network feature extractor, $x_t$ the $t$-th sample, and $x_k^n$ the $k$-th sample of grade $n$;
Step5-2, the prototype correction in the meta-network is
$p'_n = \sum_{k=1}^{K} \left(\alpha\, w_{n,k}^{inter} + \beta\, w_{n,k}^{intra}\right) \tilde{g}(x_k^n)$
where $\alpha + \beta = 1$, $w^{inter}$ is the inter-class weight, $w^{intra}$ the intra-class weight, $\tilde{g}$ the meta-network feature extractor, $x_t$ the $t$-th sample, and $x_k^n$ the $k$-th sample of grade $n$.

6. The fundus lesion grading method based on small sample learning according to claim 1, characterized in that Step6 specifically proceeds as follows:
Step6-1, defining the EC similarity score $S_{EC}(x, c_n)$ by fusing the similarities between the query embedding and the corrected prototypes in the two networks, where $x$ denotes a query sample, $c_n$ the sample center of the fundus lesion grade, $g_\xi$ the contrast network feature extractor, $\tilde{g}$ the meta-network feature extractor, $p_n$ the prototype corrected by the contrast network, and $p'_n$ the prototype corrected by the meta-network;
Step6-2, for a query sample $x$, after computing the EC similarity score of each fundus lesion grade, using the softmax function to obtain the probability of the corresponding grade:
$P(y = n \mid x) = \frac{\exp(S_{EC}(x, c_n) / T)}{\sum_{n'} \exp(S_{EC}(x, c_{n'}) / T)}$
where $S_{EC}$ denotes the EC similarity score, $x$ the query sample, $c_n$ the sample center of the fundus lesion grade, and $T$ a hyperparameter.
CN202311491052.9A (priority date 2023-11-10, filing date 2023-11-10): Fundus lesion grading method based on small sample learning. Active; granted as CN117557840B.

Priority Applications (1)

Application Number: CN202311491052.9A
Priority Date: 2023-11-10
Filing Date: 2023-11-10
Title: Fundus lesion grading method based on small sample learning

Publications (2)

Publication Number: CN117557840A, published 2024-02-13
Publication Number: CN117557840B, published 2024-05-24

Family

ID=89817778

Family Applications (1)

Application Number: CN202311491052.9A
Title: Fundus lesion grading method based on small sample learning
Priority Date: 2023-11-10
Filing Date: 2023-11-10

Country Status (1)

Country: CN
Link: CN117557840B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117935030A (en) * 2024-03-22 2024-04-26 广东工业大学 Multi-label confidence calibration method and system based on dual-view correlation-aware regularization
CN118411573A (en) * 2024-07-01 2024-07-30 苏州大学 An automatic classification method and system for rare fundus diseases based on OCT images

Citations (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108615051A (en) * 2018-04-13 2018-10-02 博众精工科技股份有限公司 Diabetic retina image classification method based on deep learning and system
US20200234445A1 (en) * 2018-04-13 2020-07-23 Bozhon Precision Industry Technology Co., Ltd. Method and system for classifying diabetic retina images based on deep learning
CN110969191A (en) * 2019-11-07 2020-04-07 吉林大学 Prediction method of glaucoma prevalence probability based on similarity-preserving metric learning method
CN111639679A (en) * 2020-05-09 2020-09-08 西北工业大学 Small sample learning method based on multi-scale metric learning
CN111858991A (en) * 2020-08-06 2020-10-30 南京大学 A Few-Sample Learning Algorithm Based on Covariance Metrics
AU2020103938A4 (en) * 2020-12-07 2021-02-11 Capital Medical University A classification method of diabetic retinopathy grade based on deep learning
CN113361612A (en) * 2021-06-11 2021-09-07 浙江工业大学 Magnetocardiogram classification method based on deep learning
CN113537305A (en) * 2021-06-29 2021-10-22 复旦大学 Image classification method based on matching network less-sample learning
EP3944185A1 (en) * 2020-07-23 2022-01-26 INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência Computer-implemented method, system and computer program product for detecting a retinal condition from eye fundus images
CN114022766A (en) * 2021-11-04 2022-02-08 江苏农林职业技术学院 Tea typical disease image recognition system and method based on small sample learning
CN114283355A (en) * 2021-12-06 2022-04-05 重庆邮电大学 A Multi-target Endangered Animal Tracking Method Based on Few-Sample Learning
CN114494195A (en) * 2022-01-26 2022-05-13 南通大学 Few-Shot Attention Mechanism Parallel Siamese Approach for Fundus Image Classification
CN114898158A (en) * 2022-05-24 2022-08-12 杭州电子科技大学 Small-sample traffic anomaly image acquisition method and system based on multi-scale attention coupling mechanism
CN115019089A (en) * 2022-05-30 2022-09-06 中科苏州智能计算技术研究院 A Two-Stream Convolutional Neural Network for Few-Sample Learning
CN115170868A (en) * 2022-06-17 2022-10-11 湖南大学 Clustering-based small sample image classification two-stage meta-learning method
CN115359294A (en) * 2022-08-23 2022-11-18 上海交通大学 Cross-granularity small sample learning method based on similarity regularization intra-class mining
CN115458174A (en) * 2022-09-20 2022-12-09 吉林大学 A method for constructing an intelligent diagnosis model of diabetic retinopathy
CN115731411A (en) * 2022-10-27 2023-03-03 西北工业大学 A Few-Sample Image Classification Method Based on Prototype Generation
CN115910385A (en) * 2022-11-28 2023-04-04 中科院成都信息技术股份有限公司 Pathological degree prediction method, system, medium, equipment and terminal
WO2023056681A1 (en) * 2021-10-09 2023-04-13 北京鹰瞳科技发展股份有限公司 Method for training multi-disease referral system, multi-disease referral system and method
CN116503668A (en) * 2023-05-18 2023-07-28 西安交通大学 Medical image classification method based on small sample element learning
CN116525075A (en) * 2023-04-27 2023-08-01 四川师范大学 Method and system for computer-aided diagnosis of thyroid nodules based on few-sample learning
CN116529762A (en) * 2020-10-23 2023-08-01 基因泰克公司 Multimodal map atrophic lesion segmentation
CN116612335A (en) * 2023-07-18 2023-08-18 贵州大学 A Few-Sample Fine-grained Image Classification Method Based on Contrastive Learning
CN116824212A (en) * 2023-05-11 2023-09-29 杭州聚秀科技有限公司 Fundus photo classification method based on small sample learning
CN116883157A (en) * 2023-09-07 2023-10-13 南京大数据集团有限公司 Small sample credit assessment method and system based on metric learning
US20230342935A1 (en) * 2020-10-23 2023-10-26 Genentech, Inc. Multimodal geographic atrophy lesion segmentation

Patent Citations (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108615051A (en) * 2018-04-13 2018-10-02 Diabetic retina image classification method and system based on deep learning
US20200234445A1 (en) * 2018-04-13 2020-07-23 Bozhon Precision Industry Technology Co., Ltd. Method and system for classifying diabetic retina images based on deep learning
CN110969191A (en) * 2019-11-07 2020-04-07 Prediction method for glaucoma prevalence probability based on similarity-preserving metric learning
CN111639679A (en) * 2020-05-09 2020-09-08 西北工业大学 Small sample learning method based on multi-scale metric learning
EP3944185A1 (en) * 2020-07-23 2022-01-26 INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência Computer-implemented method, system and computer program product for detecting a retinal condition from eye fundus images
CN111858991A (en) * 2020-08-06 2020-10-30 南京大学 A Few-Sample Learning Algorithm Based on Covariance Metrics
CN116529762A (en) * 2020-10-23 2023-08-01 Genentech, Inc. Multimodal geographic atrophy lesion segmentation
US20230342935A1 (en) * 2020-10-23 2023-10-26 Genentech, Inc. Multimodal geographic atrophy lesion segmentation
AU2020103938A4 (en) * 2020-12-07 2021-02-11 Capital Medical University A classification method of diabetic retinopathy grade based on deep learning
CN113361612A (en) * 2021-06-11 2021-09-07 浙江工业大学 Magnetocardiogram classification method based on deep learning
CN113537305A (en) * 2021-06-29 2021-10-22 Image classification method based on matching-network few-shot learning
WO2023056681A1 (en) * 2021-10-09 2023-04-13 北京鹰瞳科技发展股份有限公司 Method for training multi-disease referral system, multi-disease referral system and method
CN114022766A (en) * 2021-11-04 2022-02-08 江苏农林职业技术学院 Tea typical disease image recognition system and method based on small sample learning
CN114283355A (en) * 2021-12-06 2022-04-05 重庆邮电大学 A Multi-target Endangered Animal Tracking Method Based on Few-Sample Learning
CN114494195A (en) * 2022-01-26 2022-05-13 南通大学 Few-Shot Attention Mechanism Parallel Siamese Approach for Fundus Image Classification
CN114898158A (en) * 2022-05-24 2022-08-12 杭州电子科技大学 Small-sample traffic anomaly image acquisition method and system based on multi-scale attention coupling mechanism
CN115019089A (en) * 2022-05-30 2022-09-06 中科苏州智能计算技术研究院 A Two-Stream Convolutional Neural Network for Few-Sample Learning
CN115170868A (en) * 2022-06-17 2022-10-11 Clustering-based two-stage meta-learning method for small-sample image classification
CN115359294A (en) * 2022-08-23 2022-11-18 上海交通大学 Cross-granularity small sample learning method based on similarity regularization intra-class mining
CN115458174A (en) * 2022-09-20 2022-12-09 吉林大学 A method for constructing an intelligent diagnosis model of diabetic retinopathy
CN115731411A (en) * 2022-10-27 2023-03-03 西北工业大学 A Few-Sample Image Classification Method Based on Prototype Generation
CN115910385A (en) * 2022-11-28 2023-04-04 Chengdu Information Technology Co., Ltd., Chinese Academy of Sciences Pathological degree prediction method, system, medium, equipment and terminal
CN116525075A (en) * 2023-04-27 2023-08-01 四川师范大学 Method and system for computer-aided diagnosis of thyroid nodules based on few-sample learning
CN116824212A (en) * 2023-05-11 2023-09-29 杭州聚秀科技有限公司 Fundus photo classification method based on small sample learning
CN116503668A (en) * 2023-05-18 2023-07-28 西安交通大学 Medical image classification method based on small sample element learning
CN116612335A (en) * 2023-07-18 2023-08-18 贵州大学 A Few-Sample Fine-grained Image Classification Method Based on Contrastive Learning
CN116883157A (en) * 2023-09-07 2023-10-13 南京大数据集团有限公司 Small sample credit assessment method and system based on metric learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Lei Shi; Bin Wang; Junxing Zhang: "A Multi-stage Transfer Learning Framework for Diabetic Retinopathy Grading on Small Data", ICC 2023 - IEEE International Conference on Communications, 23 October 2023 (2023-10-23) *
Zhang Shanwen; Zhang Chuanlei; Zhang Yunlong: "Radar Target Recognition Based on Two-Dimensional Locality-Sensitive Discriminant Analysis", Electronics Optics & Control, no. 04, 1 April 2013 (2013-04-01) *
Li Qiong; Bai Zhengyao; Liu Yingfang: "Deep Learning Classification Method for Diabetic Retinal Images", Journal of Image and Graphics, no. 10, 16 October 2018 (2018-10-16) *
Dong Yu, Zhang Youpeng: "A Combination Method for Conflicting Evidence Based on Clustering Weighting", Computer Software and Computer Applications, 22 March 2023 (2023-03-22) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117935030A (en) * 2024-03-22 2024-04-26 广东工业大学 Multi-label confidence calibration method and system based on dual-view correlation-aware regularization
CN117935030B (en) * 2024-03-22 2024-10-25 广东工业大学 Multi-label confidence calibration method and system based on dual-view correlation-aware regularization
CN118411573A (en) * 2024-07-01 2024-07-30 苏州大学 An automatic classification method and system for rare fundus diseases based on OCT images
CN118411573B (en) * 2024-07-01 2024-10-25 Automatic classification method and system for rare fundus diseases based on OCT images

Also Published As

Publication number Publication date
CN117557840B (en) 2024-05-24

Similar Documents

Publication Publication Date Title
WO2022188489A1 (en) Training method and apparatus for multi-mode multi-disease long-tail distribution ophthalmic disease classification model
CN117557840B (en) Fundus lesion grading method based on small sample learning
CN109325942B (en) Fundus image structure segmentation method based on fully convolutional neural network
CN107610087B (en) An automatic segmentation method of tongue coating based on deep learning
CN109300121A (en) Method and system for constructing a diagnostic model of cardiovascular disease and the diagnostic model
CN114549469A (en) Deep neural network medical image diagnosis method based on confidence degree calibration
CN110276763B (en) A Retinal Vessel Segmentation Map Generation Method Based on Credibility and Deep Learning
CN108734108B (en) A cracked tongue recognition method based on SSD network
CN113782184B (en) An assisted stroke assessment system based on pre-learning of facial key points and features
CN117995341A (en) Image-based severe disease comparison and evaluation method and system
CN109886346A (en) A Cardiac MRI Image Classification System
CN116934747B (en) Fundus image segmentation model training method, equipment and glaucoma auxiliary diagnosis system
CN117010971B (en) Intelligent health risk providing method and system based on portrait identification
Panda et al. GlaucoNet: patch-based residual deep learning network for optic disc and cup segmentation towards glaucoma assessment
CN112712122A (en) Corneal ulcer classification detection method and system based on neural network model
CN116843984A (en) GLTransNet: a mammography image classification and detection method that integrates global features
CN117975170A (en) Medical information processing method and system based on big data
CN119027983A (en) Method, device and electronic equipment for identifying images of livestock and poultry diseases
CN114140437A (en) Fundus hard exudate segmentation method based on deep learning
CN118644460A (en) A method for object detection in hysteroscopic images based on depth information and knowledge distillation
Ahmed et al. An effective deep learning network for detecting and classifying glaucomatous eye
CN109711306B (en) Method and equipment for obtaining facial features based on deep convolutional neural network
CN117457203A (en) Early stroke identification method based on multi-model fusion of patients' multi-dimensional information
CN117496217A (en) Premature infant retina image detection method based on deep learning and knowledge distillation
CN117195027A (en) Cluster weighted clustering integration method based on member selection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant