
CN111047590A - Hypertension classification method and device based on fundus images - Google Patents

Hypertension classification method and device based on fundus images

Info

Publication number
CN111047590A
CN111047590A
Authority
CN
China
Prior art keywords
hypertension
classification
related information
machine learning
output
Prior art date
Legal status
Pending
Application number
CN201911413567.0A
Other languages
Chinese (zh)
Inventor
熊健皓
王斌
赵昕
陈羽中
和超
张大磊
Current Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Beijing Airdoc Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd filed Critical Shanghai Eaglevision Medical Technology Co Ltd
Priority to CN201911413567.0A
Publication of CN111047590A
Legal status: Pending

Classifications

    • G06T 7/0012 Biomedical image inspection
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 20/00 Machine learning
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention provides a hypertension classification method and device based on fundus images. The method comprises: acquiring a fundus image of a user; and identifying the fundus image with a machine learning model, which outputs at least a hypertension classification result. During training, the machine learning model introduces various kinds of hypertension-related information to optimize classification performance: the sample data used comprise fundus images, various hypertension-related information and hypertension type information, and the model adjusts its parameters according to its output evaluation results and the hypertension-related information and hypertension type information in the sample data, so that the output classification result is more accurate.

Description

Hypertension classification method and device based on fundus images
Technical Field
The invention relates to the field of medical image analysis, in particular to a hypertension classification method and equipment based on fundus images.
Background
In recent years, machine learning techniques have been widely applied in the medical field; in particular, machine learning techniques typified by deep learning have attracted much attention in medical imaging. In fundus image detection, deep learning can accurately detect specific characteristics of a fundus image; for example, a deep learning model trained with a large number of fundus image samples from diabetic patients can then be used to detect diabetes from fundus images.
Chinese patent application No. 201810387302.7 discloses a fundus image detection method based on machine learning, which first detects highly significant features over the entire fundus image to recognize apparent disease features, and then detects less significant features in specific regions to recognize further disease features. The method broadly classifies fundus images by their various features, performs fine sub-region detection on images without obvious features, carries out the detections serially and step by step, and outputs the detection results independently, so that obvious features and subtle features can both be detected accurately. That scheme is suitable for identifying multiple disease types at the same time, for example detecting whether a fundus image shows several possibly unrelated disease features such as glaucoma, macular hole and diabetes to obtain the corresponding classifications, but it is not suitable for classifying a single disease.
Disclosure of Invention
In view of this, the present invention provides a method for constructing a classification model of hypertension, comprising:
acquiring sample data comprising fundus images, various hypertension-related information and hypertension type information; and training a machine learning model with a large amount of the sample data so that it outputs an evaluation result, wherein the evaluation result comprises at least a hypertension classification result corresponding to the hypertension type information. The machine learning model comprises a feature extraction network, for extracting feature information from the fundus image, and at least one output network, for outputting the evaluation result according to the feature information; the machine learning model adjusts its parameters according to the output evaluation result and the various hypertension-related information and hypertension type information in the sample data.
Optionally, the machine learning model has only one output network for outputting the hypertension classification result according to the feature information and the hypertension-related information.
Optionally, there are a plurality of output networks, where one of the output networks is configured to output the classification result of hypertension, and the other output networks are configured to output identification results corresponding to the plurality of types of information related to hypertension, respectively;
the machine learning model adjusts parameters of the machine learning model according to the output evaluation result and various hypertension related information and hypertension type information in sample data, and the method comprises the following steps:
determining a second loss value according to the difference between the hypertension classification result and the hypertension type information in the sample data;
determining a third loss value according to the difference between the identification result and the hypertension related information in the sample data;
determining a first loss value according to the second loss value and the third loss value;
and the machine learning model adjusts self parameters according to the first loss value.
Optionally, the other output networks comprise classification networks and/or regression networks;
the identification result output by the classification network is a classification result, and the difference between the classification result and corresponding hypertension related information in the sample data is represented by a cross entropy function;
and the identification result output by the regression network is a numerical value, and the difference between the regression result and the corresponding hypertension related information in the sample data is represented by an error function.
Optionally, the machine learning model adjusts a parameter of the feature extraction network at least according to the first loss value.
Optionally, the machine learning model further adjusts parameters of the corresponding output network according to the second loss value, and adjusts parameters of the corresponding output network according to the third loss value, respectively.
Optionally, the hypertension-related information comprises a systolic pressure and/or a diastolic pressure.
Optionally, the hypertension-related information further includes part or all of age, gender, BMI.
The invention also provides a hypertension classification method, comprising: acquiring a fundus image of a user; and identifying the fundus image with the machine learning model trained by the construction method above, and outputting at least a hypertension classification result.
Optionally, the machine learning model further outputs a plurality of hypertension-related information from the fundus image.
Optionally, the hypertension-related information comprises a systolic pressure and/or a diastolic pressure.
Optionally, the hypertension-related information further includes part or all of age, gender, BMI.
Correspondingly, the invention also provides hypertension classification model construction equipment, which comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the processor, the instructions being executable by the at least one processor to cause the at least one processor to perform the above-mentioned hypertension classification model construction method.
Correspondingly, the invention also provides hypertension classification equipment, which comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the above-described hypertension classification method.
According to the method for constructing a hypertension classification model provided by the invention, a model with learning capacity composed of neural networks is trained on sample data composed of fundus images, various hypertension-related information and hypertension type information. During training, the evaluation result output by the model is compared with the labels of the sample data, and the parameters of the model are optimized according to the difference, so that the model learns the relationship between the hypertension-related information, the hypertension type information and the content presented by the fundus image; the constructed model can thus obtain a hypertension classification result from a fundus image.
According to the hypertension classification method provided by the invention, a fundus image of the user is acquired, the features of the fundus image are extracted with the machine learning model, and a classification result representing the user's hypertension type can be output according to these features. The scheme needs no blood samples or other body indexes from the user; only fundus images need to be collected, so the classification process is non-invasive, and because various hypertension-related information is introduced into the model during training to optimize classification performance, the output classification result is more accurate. A user can determine the hypertension type in a short time with electronic equipment such as a computer, a smartphone or a server, which offers strong convenience and stability. In addition, no professional medical equipment such as blood-collection devices and no doctors or professional researchers need to be involved, so the cost of hypertension classification can be reduced while reliable reference information is provided for doctors.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be derived from them by those skilled in the art without creative effort.
FIG. 1 is a schematic diagram of a machine learning model training in an embodiment of the present invention;
FIG. 2 is a diagram illustrating another machine learning model training in an embodiment of the present invention;
FIG. 3 is a diagram illustrating a machine learning model for classifying hypertension according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating another machine learning model for classifying hypertension according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
One embodiment of the present invention provides a method for constructing a hypertension classification model, which involves a machine learning model; as shown in fig. 1, the model includes a feature extraction network 11 and a plurality of output networks 12. The networks described in the present application are neural networks, in particular convolutional neural networks. Both the classification networks and the regression networks described below require a feature extraction structure consisting of convolutional layers, pooling layers and activation layers. One or more such feature extraction structures form a feature extraction network, and the features it extracts are fed into the subsequent output networks. The networks have initialized parameters, which in this embodiment are trained with sample data so that the networks can output a hypertension classification result.
First, sample data including a fundus image, several kinds of hypertension-related information and hypertension type information is acquired. The hypertension-related information is, for example, the systolic and/or diastolic blood pressure.
The hypertension type information comprises at least two classes, hypertensive or non-hypertensive, and may further distinguish specific types of hypertension, such as primary and secondary hypertension, which are categories based on etiology, or grade 1, 2 and 3 hypertension, which are categories based on blood pressure values.
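For illustration only, the blood-pressure-based grading mentioned above might be sketched as follows; the cut-off values (140/90, 160/100, 180/110 mmHg) follow commonly used clinical guidelines and are an assumption, as the patent does not specify them:

```python
def hypertension_grade(systolic, diastolic):
    """Toy grading of a blood pressure reading (values in mmHg).

    Thresholds are assumed (common guideline cut-offs), not taken from
    the patent; the higher grade implied by either component wins.
    """
    if systolic >= 180 or diastolic >= 110:
        return "grade 3"
    if systolic >= 160 or diastolic >= 100:
        return "grade 2"
    if systolic >= 140 or diastolic >= 90:
        return "grade 1"
    return "non-hypertensive"
```

Such hand-written rules only label the training data; the model itself learns to predict the label from the fundus image.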
Sample data should be taken from real subjects; for example, a person's hypertension-related information can be obtained by reading physical examination records or case records, or collected by questionnaire. The fundus image is an image taken by a fundus camera and includes images of structures such as the macula lutea, the optic disc and the blood vessels. The retinal blood vessels of the fundus are directly visible blood vessels in the human body and are regarded as a window onto the blood vessels of other organs; the characteristics of the fundus blood vessels therefore reflect the state of certain organs to some extent, and fundus images are very closely linked to hypertension.
In practical applications, some preprocessing may be performed on the fundus image, for example cropping the edges, adjusting the size, or enhancing the contrast, to normalize the data and improve image quality.
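A toy version of such a preprocessing step (crop the dark border, resize, stretch contrast) might look like the following sketch; it assumes the image is an RGB numpy array, and the function name, brightness threshold and output size are illustrative choices, not taken from the patent:

```python
import numpy as np

def preprocess_fundus(img, out_size=256):
    """Toy fundus preprocessing: crop the dark border, resize by
    nearest-neighbour sampling, and stretch contrast to [0, 1]."""
    # Crop rows/columns whose pixels are almost entirely black.
    bright = img.mean(axis=2) > 10
    rows = np.where(bright.any(axis=1))[0]
    cols = np.where(bright.any(axis=0))[0]
    img = img[rows.min():rows.max() + 1, cols.min():cols.max() + 1]

    # Nearest-neighbour resize to a fixed square (no external deps).
    h, w = img.shape[:2]
    ri = np.arange(out_size) * h // out_size
    ci = np.arange(out_size) * w // out_size
    img = img[ri][:, ci]

    # Min-max contrast normalisation to [0, 1].
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)
```

In practice a production pipeline would use an image library for resizing and a more robust contrast method (e.g. histogram equalization); the point is only that every image reaches the network in a normalized form.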
The fundus image, various hypertension-related information, and hypertension type information in one sample data should be from the same subject. To train a machine learning model, a large amount of sample data should be acquired and divided into training data and test data.
Next, the machine learning model is trained with a large amount of sample data so that it outputs evaluation results. In this embodiment, the evaluation results output by the model include a hypertension classification result corresponding to the hypertension type information and a recognition result corresponding to each piece of hypertension-related information; that is, the hypertension-related information and the hypertension type information in the sample data serve as labels, and the model's outputs correspond to these labels. For example, if one sample comprises a fundus image P together with hypertension-related information A, hypertension-related information B and hypertension type information X, the fundus image P carries the three labels A, B and X, and the model recognizes the fundus image P and outputs a recognition result A' corresponding to A, a recognition result B' corresponding to B, and a classification result X' corresponding to X.
The feature extraction network 11 shown in fig. 1 extracts feature information from the fundus image, and the output networks 12 output the evaluation results corresponding to the hypertension-related information and the hypertension type information, respectively, based on that feature information. The output networks share the features extracted by the same feature extraction network, and each outputs the result corresponding to its label. For example, the feature extraction network 11 processes the fundus image P to obtain feature information feature_P; the first output network outputs the evaluation result A' corresponding to label A according to feature_P, the second output network outputs the evaluation result B' corresponding to label B according to feature_P, and the third output network outputs the evaluation result X' corresponding to label X according to feature_P.
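The shared-feature, multi-head arrangement described here can be sketched with a toy numpy stand-in; fixed random weights take the place of a trained convolutional feature extractor, and all names and sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "feature extraction network": the patent uses a convolutional
# network; a fixed random projection plus ReLU merely illustrates that
# every output head consumes the same shared feature vector.
W_feat = rng.normal(size=(64, 16))

def extract_features(image_vec):
    return np.maximum(image_vec @ W_feat, 0.0)  # ReLU activation

# Three illustrative heads sharing the features: two regression heads
# (age, systolic pressure) and one classification head (hypertension).
heads = {
    "age": rng.normal(size=(16, 1)),
    "systolic": rng.normal(size=(16, 1)),
    "hypertension": rng.normal(size=(16, 2)),
}

def forward(image_vec):
    f = extract_features(image_vec)          # shared feature_P
    out = {}
    for name, W in heads.items():
        z = f @ W
        if name == "hypertension":           # classification head: softmax
            e = np.exp(z - z.max())
            out[name] = e / e.sum()
        else:                                # regression heads: raw value
            out[name] = float(z[0])
    return out
```

One forward pass thus yields every evaluation result (A', B', X' in the text's notation) from a single shared feature vector.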
The machine learning model adjusts its parameters according to the difference between the output evaluation results and the hypertension-related information and hypertension type information in the sample data. Following the usual practice for neural networks, the model determines a loss value from this difference and optimizes its parameters by back-propagation so as to reduce the difference.
According to this method for constructing a hypertension classification model, a model with learning capacity composed of neural networks is trained on sample data composed of fundus images, various hypertension-related information and hypertension type information. During training, the evaluation result output by the model is compared with the labels of the sample data, and the parameters of the model are optimized according to the difference, so that the model learns the relationship between the hypertension-related information, the hypertension type information and the content presented by the fundus image; the constructed model can thus obtain a hypertension classification result from a fundus image.
The differences may be expressed differently or identically for different kinds of evaluation results and labels. In one embodiment, the evaluation results all belong to classification results with two or more classes, i.e. the output networks may all be classification networks. The output structure of a classification network typically contains an output layer with a Softmax or Sigmoid function, and the output is typically a confidence or probability between 0 and 1 describing how likely the input belongs to one or several classes. For example, if one piece of hypertension-related information is gender, with 0 and 1 representing male and female respectively, the output network performs binary classification based on the feature information and outputs a value between 0 and 1 as the classification result; multi-class problems are handled similarly.
Preferably, the difference between a classification result and its label can be expressed with a cross-entropy function, such as L = −Σᵢ yᵢ log(pᵢ), where yᵢ is the label for class i and pᵢ is the predicted probability of class i.
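A minimal numeric sketch of this cross-entropy loss for the binary case, assuming 0/1 labels and sigmoid-style confidences as described above:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy between 0/1 labels and predicted confidences."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # guard against log(0)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1.0 - y_true) * np.log(1.0 - y_pred)))
```

The loss approaches 0 for confident correct predictions and grows without bound for confident wrong ones, which is what drives the parameter updates during back-propagation.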
In this way, a loss value can be obtained for each classification result and its corresponding label. In other possible embodiments, the difference between a classification result and its label may also be expressed with a log-likelihood function, an exponential loss function or a quadratic loss function.

In another embodiment, the evaluation results all belong to regression prediction results, i.e. the output networks may all be regression networks, and each network outputs a numerical value. The output of a regression network is a numerical prediction of a specific index, such as age, sex or systolic blood pressure. The output structure of such a network comprises at least a fully-connected layer that weights its input, and an activation layer may be added to change the response, for example using ReLU as the activation function, which maps values below 0 to 0 and leaves values above 0 unchanged. For example, if one piece of hypertension-related information is age, i.e. the label is an age value, the output network performs regression prediction based on the feature information and outputs a value, namely the predicted age.
The difference between a regression prediction result and its label can be expressed with an error function, such as the mean square error (MSE), mean absolute error (MAE) or mean absolute percentage error (MAPE). In this way, a loss value can be obtained for each regression prediction result and its corresponding label.
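These error functions are standard and can be sketched as follows (helper names are illustrative):

```python
import numpy as np

def mse(y, y_hat):
    """Mean square error."""
    return float(np.mean((y - y_hat) ** 2))

def mae(y, y_hat):
    """Mean absolute error."""
    return float(np.mean(np.abs(y - y_hat)))

def mape(y, y_hat, eps=1e-8):
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((y - y_hat) / (y + eps))) * 100.0)
```

MSE penalizes large errors more heavily, MAE is more robust to outliers, and MAPE expresses the error relative to the label's magnitude; which suits a given index (age, blood pressure, etc.) is a design choice.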
In a preferred embodiment, the evaluation results include both classification results and regression prediction results, and the output networks include both classification networks and regression networks; the two kinds of results express their differences in the respective ways described above to determine their loss values. This preferred scheme uses the appropriate kind of network for each kind of label and output, and combines the advantages of classification and regression networks to improve the accuracy of the training result.
The various hypertension-related information in the sample data used includes at least systolic and/or diastolic blood pressure, and may further include part or all of age, sex, BMI (Body Mass Index).
The evaluation results output by the hypertension model are the respective results corresponding to the labels, and these recognition targets can be cast either as classification problems or as regression prediction problems; for example, age can be treated as a regression prediction result, or it can be segmented into ranges and treated as a classification result.
According to the model structure in this embodiment, a loss value can be calculated from the difference between each evaluation result and its corresponding label, such as an age loss value L_age, a systolic pressure loss value L_SBP, a diastolic pressure loss value L_DBP, a gender loss value L_gender, etc.; these loss values corresponding to the hypertension-related information may be referred to as third loss values, of which there are several. The loss value L_DB of the hypertension classification is referred to as the second loss value. When the model adjusts its parameters according to these loss values, a total loss value (the first loss value) L_tot can be calculated from them, i.e. L_tot = f(L_age, L_SBP, L_DBP, L_gender, L_DB, …). The influence of the third and second loss values on the total loss value may differ; for example, a weight may be set for each loss value, and the total loss value calculated by linear or non-linear weighting. The model finally adjusts its parameters according to the calculated total loss value.
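A linear weighted combination of the second and third loss values into the first (total) loss might be sketched as follows; the weight values in the usage lines are an illustrative choice, not specified by the patent:

```python
def total_loss(losses, weights=None):
    """Combine per-task loss values (second/third losses) into the
    first (total) loss by linear weighting; unweighted tasks get 1.0."""
    weights = weights or {}
    return sum(weights.get(name, 1.0) * value
               for name, value in losses.items())

# Illustrative usage: emphasise the hypertension-classification head.
L_tot = total_loss(
    {"age": 0.8, "SBP": 1.2, "DBP": 0.9, "gender": 0.3, "DB": 0.6},
    weights={"DB": 2.0},
)
```

Non-linear weighting schemes (e.g. uncertainty-based task weighting) would replace the linear sum, but the interface stays the same: many per-task losses in, one scalar out for back-propagation.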
Further, the machine learning model adjusts the parameters of the feature extraction network 11 at least according to the first loss value, which optimizes the feature extraction performance, makes the obtained feature information more accurate, and thereby improves the accuracy of the output networks. The parameters of each output network 12 may also be adjusted according to the first loss value, or the parameters of each output network may be adjusted according to the corresponding third or second loss value, respectively. For example, the parameters of the age output network may be adjusted according to the age loss value L_age, and the parameters of the hypertension classification output network according to the hypertension classification loss value L_DB.
In the preferred embodiment, the parameters of the feature extraction network are optimized through the total loss value, and the parameters of each output network through the loss value of the corresponding evaluation result, which can improve the efficiency and performance of model training.
In another embodiment of the present invention, a machine learning model is provided, as shown in fig. 2, which includes a feature extraction network 11 and a single output network 12. The output network 12 outputs the hypertension classification result according to the feature information extracted by the feature extraction network 11 together with various hypertension-related information such as the systolic and diastolic pressure, and the model adjusts its parameters according to the loss value L_DB.
The machine learning model of this embodiment has a single output network and uses the various hypertension-related information as auxiliary information; by combining the model with this auxiliary information, it learns the relationship between the hypertension classification information and the fundus image, which improves classification efficiency and accuracy.
One embodiment of the present invention provides a hypertension classification method, which involves a machine learning model that can be obtained by training with the method described in the embodiments above. Referring to fig. 3, the method of this embodiment comprises: acquiring a fundus image 20 of a user, identifying the fundus image 20 with a machine learning model 21, and outputting various hypertension-related information together with a hypertension classification result. The machine learning model 21 includes a feature extraction network 211 for extracting feature information from the fundus image, and a plurality of output networks 212: one output network 212 outputs the hypertension classification information, and the others output the various pieces of hypertension-related information, respectively, based on the feature information. The fundus image 20 is an image taken by a fundus camera and includes images of structures such as the macula lutea, the optic disc and the blood vessels. The output networks include classification networks and/or regression networks.
The information related to hypertension may include at least systolic and/or diastolic blood pressure, and may further include part or all of age, sex, and BMI (Body Mass Index).
The hypertension type information may be at least two types of information, i.e., hypertension or non-hypertension, and may further include specific types of hypertension, such as primary hypertension, secondary hypertension, etc., which are categories based on etiology, or hypertension 1, 2, and 3, which are categories based on blood pressure values.
According to the hypertension classification method provided by this embodiment of the invention, a fundus image of the user is acquired, the features of the fundus image are extracted with the machine learning model, a classification result representing the user's hypertension type can be output according to these features, and other hypertension-related information can be obtained at the same time. The scheme needs no blood samples or other body indexes from the user; only fundus images need to be collected, so the classification process is non-invasive, and because various hypertension-related information is introduced into the model during training to optimize classification performance, the output classification result is more accurate. The hypertension type of the subject can be determined in a short time with electronic equipment such as a computer, a smartphone or a server, which offers strong convenience and stability. In addition, no doctors or professional researchers need to participate in the scheme, so the cost of hypertension classification can be reduced while reliable reference information is provided for doctors.
One embodiment of the present invention provides a hypertension classification method that involves a machine learning model, which can be obtained by training as described in the above embodiment. The method of this embodiment comprises the following steps: a fundus image of the user is acquired; as shown in fig. 4, the fundus image 20 is an image captured by a fundus camera and contains structures such as the macula, the optic disc, and the blood vessels.
The fundus image is then identified by a machine learning model, and a hypertension classification result is output. The machine learning model in this embodiment includes a feature extraction network 211 and a single output network 212. The feature extraction network 211 extracts feature information from the fundus image, and the output network 212 outputs the hypertension classification result according to the feature information. Optionally, the user may provide one or more kinds of easily collected hypertension-related information, such as values collected non-invasively, in which case the output network 212 outputs the hypertension classification result from the feature information combined with that hypertension-related information.
Because the model has no other output networks, no other related information is presented at classification time; its performance was nonetheless optimized with various kinds of hypertension-related information during training, so the output classification result is more accurate. The result is concise and intuitive, which suits application scenarios requiring rapid assessment.
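The single-output variant, where the output network combines the extracted image features with user-supplied hypertension-related information, reduces to concatenating the two vectors before the final classification layer. A minimal sketch follows; the 16 image features, the three tabular values (age, sex code, BMI), and the random weights are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical inputs: feature information from the feature
# extraction network plus user-provided hypertension-related info.
image_features = rng.normal(size=16)        # output of feature extraction net
related_info = np.array([52.0, 1.0, 24.3])  # age, sex (coded), BMI

# The single output network 212 sees both, concatenated.
combined = np.concatenate([image_features, related_info])
W_out = rng.normal(size=(2, combined.size)) * 0.1

class_probs = softmax(W_out @ combined)     # hypertensive / non-hypertensive
print(class_probs)
```

In practice the tabular values would be normalized before concatenation so that their scale does not dominate the learned image features.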
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.
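The loss combination recited in claims 3 and 4 below (a cross-entropy term for the classification result, an error term for the regression outputs, and a first loss value derived from both) can be illustrated numerically. All concrete numbers and the 0.01 weighting between the terms are assumptions for demonstration; the patent does not specify how the first loss value is formed from the second and third.

```python
import numpy as np

def cross_entropy(probs, label):
    """Cross-entropy between a predicted distribution and a class label."""
    return float(-np.log(probs[label] + 1e-12))

def mse(pred, target):
    """Error function for the regression-network outputs."""
    return float(np.mean((pred - target) ** 2))

# Hypothetical model outputs and sample-data ground truth.
class_probs = np.array([0.7, 0.3])   # hypertension classification result
label = 0                            # hypertension type info in the sample
reg_pred = np.array([135.0, 88.0])   # predicted systolic / diastolic
reg_true = np.array([140.0, 90.0])   # hypertension-related info in sample

second_loss = cross_entropy(class_probs, label)  # classification vs. type info
third_loss = mse(reg_pred, reg_true)             # regression vs. related info
first_loss = second_loss + 0.01 * third_loss     # weighting is an assumption

print(second_loss, third_loss, first_loss)
```

The model would then adjust the feature extraction network's parameters from the first loss value, while each output network can additionally be updated from its own term.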

Claims (14)

1. A method for constructing a hypertension classification model is characterized by comprising the following steps:
acquiring sample data comprising a fundus image, various kinds of hypertension-related information, and hypertension type information; training a machine learning model with a large amount of the sample data to output an evaluation result, wherein the evaluation result comprises at least a hypertension classification result corresponding to the hypertension type information, and the machine learning model comprises a feature extraction network for extracting feature information from the fundus image and at least one output network for outputting the evaluation result according to the feature information; and adjusting, by the machine learning model, its own parameters according to the output evaluation result and the various kinds of hypertension-related information and hypertension type information in the sample data.
2. The method of claim 1, wherein the machine learning model has only one output network for outputting the hypertension classification result according to the feature information and the hypertension-related information.
3. The method according to claim 1, wherein there are a plurality of output networks, one of which outputs the hypertension classification result, and the others of which respectively output identification results corresponding to the various kinds of hypertension-related information;
wherein adjusting, by the machine learning model, its own parameters according to the output evaluation result and the various kinds of hypertension-related information and hypertension type information in the sample data comprises:
determining a second loss value according to the difference between the hypertension classification result and the hypertension type information in the sample data;
determining a third loss value according to the difference between the identification result and the hypertension related information in the sample data;
determining a first loss value according to the second loss value and the third loss value;
and adjusting, by the machine learning model, its own parameters according to the first loss value.
4. The method of claim 3, wherein the other output networks comprise classification networks and/or regression networks;
the identification result output by the classification network is a classification result, and the difference between the classification result and corresponding hypertension related information in the sample data is represented by a cross entropy function;
and the identification result output by the regression network is a numerical value, and the difference between this numerical value and the corresponding hypertension-related information in the sample data is characterized by an error function.
5. The method of claim 3, wherein the machine learning model adjusts parameters of the feature extraction network based at least on the first loss value.
6. The method of claim 5, wherein the machine learning model further adjusts parameters of the corresponding output network based on the second loss value and adjusts parameters of the corresponding output network based on the third loss value, respectively.
7. The method according to any of claims 1-6, wherein the hypertension-related information includes systolic and/or diastolic blood pressure.
8. The method of claim 7, wherein the hypertension-related information further includes some or all of age, gender, BMI.
9. A method of classifying hypertension, comprising: acquiring a fundus image of a user; and identifying the fundus image using a machine learning model constructed by the method of any one of claims 1-8, and outputting at least a hypertension classification result.
10. The method of claim 9, wherein the machine learning model further outputs a plurality of hypertension-related information from the fundus image.
11. The method according to claim 10, wherein the hypertension-related information includes systolic and/or diastolic blood pressure.
12. The method of claim 11, wherein the hypertension-related information further includes some or all of age, gender, BMI.
13. A hypertension classification model construction device is characterized by comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of constructing a hypertension classification model according to any one of claims 1-8.
14. A hypertension classification device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of hypertension classification according to any one of claims 9-12.
CN201911413567.0A 2019-12-31 2019-12-31 Hypertension classification method and device based on fundus images Pending CN111047590A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911413567.0A CN111047590A (en) 2019-12-31 2019-12-31 Hypertension classification method and device based on fundus images


Publications (1)

Publication Number Publication Date
CN111047590A true CN111047590A (en) 2020-04-21

Family

ID=70241093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911413567.0A Pending CN111047590A (en) 2019-12-31 2019-12-31 Hypertension classification method and device based on fundus images

Country Status (1)

Country Link
CN (1) CN111047590A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104835150A (en) * 2015-04-23 2015-08-12 深圳大学 Learning-based eyeground blood vessel geometric key point image processing method and apparatus
CN110136828A (en) * 2019-05-16 2019-08-16 杭州健培科技有限公司 A method of medical image multitask auxiliary diagnosis is realized based on deep learning
CN110135528A (en) * 2019-06-13 2019-08-16 上海鹰瞳医疗科技有限公司 Age determines that method, eye health degree determine method and apparatus

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539480A (en) * 2020-04-27 2020-08-14 上海鹰瞳医疗科技有限公司 Multi-class medical image identification method and equipment
CN111539480B (en) * 2020-04-27 2023-10-17 上海鹰瞳医疗科技有限公司 Multi-category medical image recognition method and equipment
CN113017831A (en) * 2021-02-26 2021-06-25 上海鹰瞳医疗科技有限公司 Method and equipment for predicting arch height after artificial lens implantation
CN113689954A (en) * 2021-08-24 2021-11-23 平安科技(深圳)有限公司 Hypertension risk prediction method, device, equipment and medium

Similar Documents

Publication Publication Date Title
Li et al. Computer‐assisted diagnosis for diabetic retinopathy based on fundus images using deep convolutional neural network
CN111080643A (en) Method and device for classifying diabetes and related diseases based on fundus images
Wu et al. Cascaded fully convolutional networks for automatic prenatal ultrasound image segmentation
CN109544518B (en) Method and system applied to bone maturity assessment
CN111048210B (en) Method and equipment for evaluating disease risk based on fundus image
CN117457229B (en) Anesthesia depth monitoring system and method based on artificial intelligence
CN114549469A (en) Deep neural network medical image diagnosis method based on confidence degree calibration
CN111047590A (en) Hypertension classification method and device based on fundus images
CN117952964B (en) Fundus medical image analysis method based on computer vision technology
CN117010971B (en) Intelligent health risk providing method and system based on portrait identification
CN117542474A (en) Remote nursing monitoring system and method based on big data
CN111028232A (en) Diabetes classification method and equipment based on fundus images
Das et al. Automated classification of retinal OCT images using a deep multi-scale fusion CNN
CN112052874B (en) Physiological data classification method and system based on generation countermeasure network
Leopold et al. Segmentation and feature extraction of retinal vascular morphology
CN112990270B (en) Automatic fusion method of traditional feature and depth feature
Zamzmi et al. Evaluation of an artificial intelligence-based system for echocardiographic estimation of right atrial pressure
CN117352164A (en) Multi-mode tumor detection and diagnosis platform based on artificial intelligence and processing method thereof
CN116564505A (en) Thyroid disease screening method, system, equipment and storage medium based on deep learning
CN112562819B (en) Report generation method of ultrasonic multi-section data for congenital heart disease
Chowdary et al. Multiple Disease Prediction by Applying Machine Learning and Deep Learning Algorithms
CN115035339A (en) Cystoscope image classification method based on artificial intelligence
CN114557670A (en) Physiological age prediction method, apparatus, device and medium
Kandel Deep Learning Techniques for Medical Image Classification
Randive et al. A self-adaptive optimisation for diabetic retinopathy detection with neural classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210726

Address after: 100083 room 21, 4th floor, building 2, National Defense Science and Technology Park, Beijing Institute of Technology, Haidian District, Beijing

Applicant after: Beijing Yingtong Technology Development Co.,Ltd.

Applicant after: SHANGHAI EAGLEVISION MEDICAL TECHNOLOGY Co.,Ltd.

Address before: 200030 room 01, 8 building, 1 Yizhou Road, Xuhui District, Shanghai, 180

Applicant before: SHANGHAI EAGLEVISION MEDICAL TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200421