CN110751636B - Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
- Publication number: CN110751636B (application CN201910966530.4A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/0012 — Biomedical image inspection
- G06T7/11 — Region-based segmentation
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30041 — Eye; Retina; Ophthalmic
- Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change
Description
Technical Field
The invention relates to a method for detecting retinal arteriosclerosis in fundus images based on an improved encoder-decoder network. It solves the problem that traditional methods cannot accurately segment fundus arteriovenous vessels and arterial reflex bands, reduces the interference that illumination and other fundus tissue features introduce into reflex-band segmentation, improves the transmission efficiency of feature and gradient information, achieves high-accuracy segmentation, and establishes a quantitative threshold for retinal arteriosclerosis detection, enabling high-accuracy detection. The invention belongs to the fields of image processing, deep learning, and medical imaging.
Background
Retinal arteriosclerosis is related to the degree of systemic arteriosclerosis. Because the fundus vessels are the only blood vessels that can be observed directly and non-invasively, regular retinal arteriosclerosis screening reveals the degree of systemic arteriosclerosis and helps physicians intervene early against its continued progression. Retinal arteriosclerosis affects both the caliber of the fundus arteries and the brightness of the arterial reflex band. At present, hospitals mainly use fundus cameras to examine the eye. When arteriosclerosis occurs, the arterial reflex band in the fundus image widens and the blood column takes on a bright metallic copper color; as the arteriosclerosis worsens, the vessels show a white silver-wire reflex.
Clinically, physicians examine a patient's fundus image by experience and judge whether arteriosclerosis is present according to whether the width of the arterial reflex band has increased relative to the arterial vessel width and whether the reflex has become brighter. The invention therefore uses the width ratio and gray-level ratio between the arterial vessel and its reflex band as the basis for arteriosclerosis detection. Because detection requires gray-level fitting of the arterial vessel and its reflex band, both must first be extracted from the fundus image. Although vessel segmentation in fundus images has received wide attention, studies on segmenting arterial vessels together with their reflex bands are scarce. Moreover, veins resemble arteries in shape and course, so poor image quality can interfere with artery segmentation, and the elongated reflex band is easily affected by ambient light, which makes this segmentation task comparatively difficult.
To address these problems, the invention proposes a fundus image retinal arteriosclerosis detection method based on an improved encoder-decoder network, which can be used for large-scale retinal arteriosclerosis screening.
Summary of the Invention
The invention proposes a fundus image retinal arteriosclerosis detection method based on an improved encoder-decoder network. First, fundus images already diagnosed with retinal arteriosclerosis by ophthalmologists are collected; a sampling line perpendicular to the arterial vessel and its reflex band is taken, the pixels along the line are fitted with a four-segment Gaussian model to obtain the gray-level distribution curve of the vessel cross-section, and the reflex parameters (width ratio and gray-level ratio) are computed to determine the quantitative detection threshold. Next, the arteriovenous vessels and arterial reflex bands in the fundus image training set are annotated separately, which reduces the interference caused by the similarity between arteries and veins. In the encoder, the improved network extracts multi-scale feature information with four Inception Resnet V2 modules, each followed by a residual attention module that enhances the target features and improves network performance. In the decoder, four upsampling layers generate sparse feature maps, four convolutional layers turn them into dense feature maps, and a SoftMax layer classifies every pixel. The Canny edge detector then finds all connected regions of the reflex band, the largest connected region is screened out, and three sampling lines perpendicular to that region are taken. Each of the three groups of sampled pixels is fitted with the four-segment Gaussian model, giving three cross-sectional gray-level distribution curves, from which three groups of reflex parameters (gray-level ratio and width ratio) are computed. Comparing the three groups of reflex parameters with the detection thresholds yields three cases: (1) if all three groups are below the thresholds, the fundus is judged free of retinal arteriosclerosis; (2) if any one group exceeds the thresholds, retinal arteriosclerosis is judged present; (3) in all other cases, the fundus is judged as suspected retinal arteriosclerosis.
The technical solution of the invention comprises the following steps:
Step 1: Collect fundus images diagnosed with retinal arteriosclerosis by ophthalmologists; take a sampling line perpendicular to the arterial vessel and its reflex band, fit the pixels along the line with a four-segment Gaussian model to obtain the gray-level distribution curve of the vessel cross-section, and compute the reflex parameters (width ratio and gray-level ratio) to determine the quantitative detection threshold;
Step 2: Annotate the arteriovenous vessels and arterial reflex bands in the fundus images with different colors, then split all images into 224×224 patches as required by the network input and use them as the training set of the improved encoder-decoder network;
Step 3: Segment the arterial vessels and arterial reflex bands of the fundus image with the improved encoder-decoder network, and screen out the effective reflex-band region used in subsequent detection;
Step 4: Within the effective region, take three sampling lines perpendicular to the arterial vessel and its reflex band; fit the pixels along each line with the four-segment Gaussian model to obtain three cross-sectional gray-level distribution curves, and from the fitting results compute the boundary coordinates and mean gray levels of the three vessel and reflex-band pairs;
Step 5: Compute the reflex parameters (width ratio and gray-level ratio) for the three vessel and reflex-band pairs, and compare them with the quantitative detection threshold to decide whether the patient has retinal arteriosclerosis.
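Step 5's two reflex parameters follow directly from the quantities produced in Step 4. The sketch below is a minimal illustration; the function name, the (left, right) boundary convention, and the use of mean gray levels are assumptions, since the patent does not spell out the exact formulas:

```python
def reflex_parameters(vessel_bounds, band_bounds, vessel_gray, band_gray):
    """Compute the two reflex parameters of Step 5 from Step 4's output.

    vessel_bounds / band_bounds: (left, right) coordinates on the sampling
    line for the arterial vessel and its reflex band (assumed convention).
    vessel_gray / band_gray: mean gray levels of the two regions.
    """
    vessel_width = vessel_bounds[1] - vessel_bounds[0]
    band_width = band_bounds[1] - band_bounds[0]
    width_ratio = band_width / vessel_width  # reflex-band width vs. vessel caliber
    gray_ratio = band_gray / vessel_gray     # reflex brightness vs. blood column
    return width_ratio, gray_ratio
```

A widened, brighter reflex pushes both ratios up, which is exactly what the thresholds determined in Step 1 test for.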
Compared with the prior art, the beneficial effects of the invention are:
By exploiting deep learning, the method solves the problem that traditional approaches cannot accurately segment fundus arteriovenous vessels and arterial reflex bands, reduces the interference of illumination and other fundus tissue features on reflex-band segmentation, improves the transmission efficiency of feature and gradient information, achieves accurate segmentation of arterial vessels and reflex bands, determines the quantitative detection threshold, and thus realizes high-accuracy retinal arteriosclerosis detection.
Brief Description of the Drawings
Fig. 1: overall framework of the invention;
Fig. 2: structure of the improved encoder-decoder network;
Fig. 3: structure of the Inception Resnet V2 module;
Fig. 4: structure of the residual attention module;
Fig. 5: example of screening the effective region of the segmented arterial reflex band;
Fig. 6: example gray-level distribution of pixels along the arterial vessel and its reflex band;
Fig. 7: example Gaussian fitting of the arterial vessel and its reflex band.
Detailed Description
The invention is described in further detail below with reference to specific embodiments.
The overall framework of the invention is shown in Fig. 1. First, fundus images diagnosed with retinal arteriosclerosis by ophthalmologists are collected; for each, a sampling line perpendicular to the arterial vessel and its reflex band is taken, the pixels along the line are fitted with the four-segment Gaussian model to obtain the cross-sectional gray-level distribution curve, and the reflex parameters (width ratio and gray-level ratio) are computed to determine the quantitative detection threshold. Second, because no public image database of fundus arteriovenous vessels and arterial reflex bands exists, hospital fundus images are collected and manually annotated by ophthalmologists to obtain training samples. The improved encoder-decoder network extracts multi-scale features with four Inception Resnet V2 modules in the encoder, each followed by a residual attention module that enhances the target features and improves network performance; in the decoder, four upsampling layers generate sparse feature maps, four convolutional layers produce dense feature maps, and SoftMax classifies every pixel, after which the network is applied to segment the arterial vessels and reflex bands of the fundus image. The Canny operator detects the reflex-band edges, the largest connected region of the reflex band is taken as the effective detection region, and three sampling lines perpendicular to the arterial vessels within it are drawn. The sampled pixels are fitted with the four-segment Gaussian model, giving three cross-sectional gray-level distribution curves, from which three groups of reflex parameters are computed and compared with the detection thresholds. Three cases arise: (1) if all three groups of reflex parameters are below the thresholds, the fundus is judged free of retinal arteriosclerosis; (2) if any one group exceeds the thresholds, retinal arteriosclerosis is judged present; (3) otherwise, the fundus is judged as suspected retinal arteriosclerosis.
The specific implementation of the technical solution is described below with reference to the drawings.
1. Determining the retinal arteriosclerosis detection threshold
Thirty fundus images diagnosed with retinal arteriosclerosis by ophthalmologists are collected. For each, a sampling line perpendicular to the arterial vessel and its reflex band is taken, the pixels along the line are fitted with the four-segment Gaussian model to obtain the cross-sectional gray-level distribution curve, and the reflex parameters are computed to determine the quantitative detection thresholds: a width-ratio threshold and a gray-level-ratio threshold.
Compared with the quantitative detection thresholds, three cases arise: (1) if all three groups of reflex parameters are below the thresholds, the fundus is judged free of retinal arteriosclerosis; (2) if any one group exceeds the thresholds, retinal arteriosclerosis is judged present; (3) otherwise, the fundus is judged as suspected retinal arteriosclerosis.
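The three-case decision can be written down directly. The sketch below assumes that a group of reflex parameters exceeds (or falls below) the thresholds only when both its width ratio and its gray-level ratio do; the threshold values themselves are determined from the 30 diagnosed images and are not reproduced in this text:

```python
def classify_fundus(params, width_thresh, gray_thresh):
    """Three-way decision on a fundus from three (width_ratio, gray_ratio)
    pairs, one per sampling line.  A group is 'below' or 'exceeding' only
    when both of its ratios are (an interpretive assumption)."""
    below = [w < width_thresh and g < gray_thresh for w, g in params]
    exceeds = [w > width_thresh and g > gray_thresh for w, g in params]
    if all(below):
        return "no retinal arteriosclerosis"      # case (1)
    if any(exceeds):
        return "retinal arteriosclerosis"         # case (2)
    return "suspected retinal arteriosclerosis"   # case (3)
```

Borderline measurements, where some ratios straddle the thresholds, fall through to the "suspected" verdict, matching case (3).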
2. Experimental data
The data used in the invention comprise 918 training samples of size 224×224 and 168 test samples. Because arteries and veins have similar characteristics and the arterial reflex band is elongated, both are easily missed or misdetected; and since no public database of fundus arteriovenous vessels and reflex bands exists, the arteriovenous vessels and reflex bands in the training samples were marked manually with a graphical annotation tool.
3. The improved encoder-decoder network
The backbone of the improved network is an encoder-decoder architecture. The feature-extraction structure of the encoder resembles VGG16 and consists mainly of four convolutional blocks, each containing convolutional layers, batch normalization, activation functions, and pooling layers. Inception Resnet V2 modules give the encoder a multi-scale structure: their convolutional layers use 1×1, 3×3, 1×3, 3×1, 1×7, and 7×1 kernels, widening the network so that features at different scales can be extracted. The encoder downsamples with max pooling to retain salient features and reduce the feature dimension, and uses padding to preserve boundary information. A residual attention module is added after each max-pooling layer so that the network can choose where to focus and strengthen the feature representation at those locations. The decoder mirrors the SegNet decoder: four upsampling blocks containing convolutional, activation, batch-normalization, and upsampling layers, followed by a SoftMax classifier that produces the final segmentation. The structure of the improved network is shown in Fig. 2.
(1) Encoder
The encoder runs from the image input layer to the last pooling layer. It uses Inception Resnet V2 modules to widen the network and extract multi-scale features, while residual attention modules focus the network on the target region and improve performance. First, five Inception Resnet V2-A modules extract features from the 224×224 input; their kernels are 1×1 and 3×3, where the 1×1 convolutions limit the number of input channels and two consecutive 3×3 convolutions act like a single 5×5 convolution, after which a 1×1 convolution matches the input and output dimensions and a residual connection fuses the features. After max pooling, the feature map shrinks to 112×112 and enters the Stage 1 residual attention module, a spatial-attention module that applies L2 normalization across all channels at each position and outputs an attention map with the same spatial dimensions. Five Inception Resnet V2-B modules follow, with 1×1, 1×7, and 7×1 kernels (the latter two acting like a single 7×7 convolution); after fusion and max pooling this yields a 56×56 feature map, which enters the Stage 2 module, a channel-attention module that, like SENet, constrains all feature values in each channel and outputs a one-dimensional weight vector whose length equals the number of channels. Next, five Inception Resnet V2-C modules with 1×1, 1×3, and 3×1 kernels (the latter two acting like a single 3×3 convolution) are applied, followed by max pooling to 28×28 and the Stage 3 module, a mixed-attention module that applies a Sigmoid over every channel and every spatial position to introduce nonlinearity. Finally, five more Inception Resnet V2-C modules extract high-level features, and a last max-pooling layer reduces the feature map to 14×14. Because residual attention serves to focus on the target in larger feature maps, no further attention module is added once the feature map is sufficiently small.
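The size and receptive-field bookkeeping in the paragraph above can be checked with a small sketch (pure Python, assuming stride-1 convolutions and 2×2 max pooling; channel counts are not modeled):

```python
def pooled_sizes(size=224, stages=4):
    """Spatial size after each of the four 2x2 max-pooling steps:
    224 -> 112 -> 56 -> 28 -> 14."""
    out = []
    for _ in range(stages):
        size //= 2
        out.append(size)
    return out

def stacked_receptive_field(kernels):
    """Receptive field (along one axis) of stacked stride-1 convolutions:
    each k-wide layer adds k - 1 pixels, so two 3x3 layers act like one
    5x5, and the factorized 1x7 / 7x1 pair acts like one 7x7 per axis."""
    rf = 1
    for k in kernels:
        rf += k - 1
    return rf
```

This is why the text can describe the factorized kernel pairs as equivalent to single 5×5, 7×7, or 3×3 convolutions while using fewer parameters.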
(2) Decoder
The decoder runs from the upsampling layer after the last max-pooling layer to the SoftMax layer. It decodes the encoder's information layer by layer through upsampling and convolutional layers: the 14×14 feature map passes through an upsampling layer to produce a sparse feature map, and a 3×3 convolution then produces a 28×28 dense feature map; four such upsampling-plus-convolution stages are applied in the same way, and the high-dimensional features at the output are finally fed to SoftMax to classify every pixel.
3.1 The Inception Resnet V2 module
In semantic segmentation, targets occupy regions of very different sizes in different images, so their location information varies greatly. When information is distributed evenly across the image, larger kernels are appropriate; when it is concentrated, smaller kernels work better, which makes choosing a single kernel size difficult. If a convolutional network simply stacks large convolutional layers to extract information, it overfits easily, gradient information propagates poorly through the network, and the computational cost rises. The Inception Resnet V2 module instead uses kernels of several sizes at the same level, widening the network so that features at multiple scales can be extracted. To reduce information loss, three variants are built (Inception Resnet V2-A, Inception Resnet V2-B, and Inception Resnet V2-C) and residual connections are introduced, yielding a more efficient feature-extraction method. The module structure is shown in Fig. 3.
3.2 The residual attention module
The residual attention network is a convolutional neural network built by stacking attention modules; it combines end-to-end training with modern feed-forward network structures. As the depth grows, the attention features of the successive modules adapt: the network not only selects where to focus and captures different types of attention in the image, but also strengthens the feature representation of the object at that location, improving performance. The network also adopts residual attention learning, which makes deep convolutional networks trainable. Each residual attention module has two branches: a mask branch and a trunk branch. The trunk branch uses pre-activation residual modules and Inception modules as its basic units. The mask branch processes the feature map mainly through forward downsampling and upsampling: downsampling quickly encodes the global features of the feature map, and upsampling combines the extracted global high-dimensional features with the non-downsampled features, so that high- and low-dimensional features merge into an attention map of the same dimensions. The feature maps of the two branches are then combined by element-wise multiplication to give the final output feature map:
H_{i,c}(x) = (1 + M_{i,c}(x)) × F_{i,c}(x)
where M_{i,c}(x) is the feature map output by the mask branch, F_{i,c}(x) is the residual between the output and the input, learned and fitted by the deep convolutional network, i ranges over spatial positions, and c is the channel index.
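The combination rule above is a one-liner in practice; a minimal NumPy sketch with toy shapes and illustrative values:

```python
import numpy as np

def residual_attention(M, F):
    """H_{i,c}(x) = (1 + M_{i,c}(x)) * F_{i,c}(x): the identity term
    keeps the trunk features F intact even where the soft mask M is
    near zero, which is what keeps deep stacks of these modules
    trainable."""
    return (1.0 + M) * F

M = np.full((1, 2, 2), 0.5)   # mask branch output, values in [0, 1]
F = np.ones((1, 2, 2))        # trunk branch features
H = residual_attention(M, F)  # every element equals 1.5
```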
Residual attention mechanism modules come in three types: spatial attention, channel attention, and concentrated attention. Their structure is shown in Figure 4.
4. Screening the effective region of the arterial reflective band
To reduce the workload of the subsequent arteriosclerosis detection, the segmentation results are screened for the effective region of the arterial reflective band. Figure 5 illustrates the screening process: Figure 5(a) shows the arterial reflective band extracted from the segmentation result; Figure 5(b) shows the binarized reflective band; Figure 5(c) shows the connected components of the reflective band found with the Canny operator; finally, the largest connected component of the reflective band is selected, yielding the arterial vessel and reflective band to be examined, as shown in Figure 5(d).
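The largest-connected-component screening can be sketched as follows. For brevity, `scipy.ndimage` labeling replaces the Canny-based step of the original pipeline, and the input array and threshold are made up purely for illustration:

```python
import numpy as np
from scipy import ndimage

def largest_component(gray, thresh):
    """Binarize the reflective-band map and keep only its largest
    connected component, discarding small spurious regions."""
    binary = gray > thresh
    labels, n = ndimage.label(binary)  # 4-connectivity by default
    if n == 0:
        return np.zeros_like(binary)
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)

# Two bright blobs: a 3-pixel band and an isolated pixel
gray = np.array([[0, 200, 200, 200],
                 [0,   0,   0,   0],
                 [200, 0,   0,   0]])
mask = largest_component(gray, thresh=128)  # keeps only the 3-pixel blob
```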
5. Gaussian fitting of the arterial vessel and the arterial reflective band
Figure 6 shows an example of the gray-level distribution of pixels across an arterial vessel and its reflective band. The upper curve is the radial gray-level profile of the vessel; the lower curve is the first derivative of that profile. The local minima C and D of the profile mark the boundaries of the reflective band, and M is the highest point of the profile. Because the reflective band lies at the center of the artery and is brighter than the vessel, the radial gray-level distribution of the artery follows a Gaussian law, and the coordinates of the minima C and D can be determined by computing the first derivative of the profile. The present invention therefore describes the cross-sectional gray-level distribution of the vessel and reflective band with a four-segment Gaussian model, which is not constrained by the asymmetry of that distribution and locates the reflective-band boundaries conveniently and accurately.
The four-segment Gaussian function adopted by the present invention (a sum of four Gaussian terms, reconstructed here from the parameter definitions that follow) is:

f(x) = Σ_{k=1}^{4} a_k · exp(−((x − b_k) / c_k)²)
where a_1, a_2, a_3, a_4 are the Gaussian peak heights, b_1, b_2, b_3, b_4 are the peak positions, and c_1, c_2, c_3, c_4 are the standard deviations.
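A fit of this four-Gaussian model to a synthetic radial profile can be done with `scipy.optimize.curve_fit`; all profile parameters and initial guesses below are invented for illustration, not taken from the patent:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss4(x, *p):
    """Sum of four Gaussians a_k * exp(-((x - b_k) / c_k)^2), k = 1..4."""
    a, b, c = p[0:4], p[4:8], p[8:12]
    return sum(a[k] * np.exp(-((x - b[k]) / c[k]) ** 2) for k in range(4))

# Synthetic cross-section: bright vessel edges plus a central light reflex
x = np.linspace(0, 40, 200)
true = gauss4(x, 60, 80, 80, 60,   # peak heights a1..a4
                  8, 17, 23, 32,   # peak positions b1..b4
                  3,  4,  4,  3)   # widths c1..c4
p0 = [50, 70, 70, 50, 7, 16, 24, 33, 3, 3, 3, 3]  # rough initial guess
popt, _ = curve_fit(gauss4, x, true, p0=p0, maxfev=20000)
fit = gauss4(x, *popt)
```

The minima of the fitted curve (points C and D in the text) can then be read off the sign changes of its first derivative.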
To describe the radial gray-level distribution of the artery accurately, the present invention samples the artery under test three times and applies a Gaussian fit to each sample; Figure 7 shows an example. Figure 7(a) shows the sampling of the artery: a sampling line perpendicular to the vessel is chosen and a four-segment Gaussian fit is applied to the sampled pixels. Figure 7(b) shows the result of fitting the gray values of the sampled pixels with the four-segment Gaussian function. Figure 7(c) shows the first derivative of the fitted curve, from which the coordinates of the minima C and D are computed. Table 1 defines each region of the fitted profile in Figure 7(b): A and B are the vessel boundaries, A⁻ and B⁺ are the retinal background, C and D are the profile minima, i.e. the reflective-band boundaries, CM and MD make up the reflective-band region, and M is the peak of segment CD.
Table 1. Definition of the arterial vessel and reflective-band regions in the example gray-level profile

Region | Definition |
---|---|
A⁻, B⁺ | Retinal background |
A, B | Arterial vessel boundaries |
C, D | Gray-level minima (reflective-band boundaries) |
CM, MD | Arterial reflective-band region |
M | Peak of segment CD |
6. Retinal arteriosclerosis detection
The reflective parameters of the arterial vessel and its reflective band are the bandwidth ratio BR and the gray-level ratio GR. From the fitting results, the boundary coordinates of the vessel and the reflective band are obtained; the width and mean gray level of the vessel and the reflective band are then computed on each of the three sampling lines, and finally the gray-level ratio and bandwidth ratio are calculated. When retinal arteriosclerosis occurs, the reflective band widens and its reflection intensifies, so the reflective parameters of the vessel and reflective band increase; the group with the largest reflective parameters among the three samples is therefore taken as the patient's reflective parameters for arteriosclerosis detection. The reflective parameters are computed as follows:
Bandwidth ratio: BR = (X_D − X_C) / (X_B − X_A)
Gray-level ratio: GR = ((1/n_1) Σ_{i=C}^{D} Y_i) / ((1/n_2) Σ_{i=A}^{B} Y_i)
where X_A, X_B, X_C, and X_D are the boundary coordinates of the vessel and the reflective band, Y_i is the gray value at each point, n_1 and n_2 are the numbers of pixels in the reflective band and in the vessel respectively, X_D − X_C is the width of the reflective band, X_B − X_A is the width of the vessel, (1/n_1) Σ Y_i is the mean gray value of the reflective band, and (1/n_2) Σ Y_i is the mean gray value of the vessel.
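With the boundary coordinates in hand, BR and GR reduce to a few lines; the profile and index positions below are fabricated purely to exercise the formulas:

```python
import numpy as np

def reflective_params(y, xa, xb, xc, xd):
    """Bandwidth ratio BR = (X_D - X_C) / (X_B - X_A) and gray ratio
    GR = mean gray over [C, D] / mean gray over [A, B], computed on
    one sampled cross-section y (gray value per pixel index)."""
    br = (xd - xc) / (xb - xa)
    gr = y[xc:xd + 1].mean() / y[xa:xb + 1].mean()
    return br, gr

# Toy cross-section: background, vessel walls, bright central reflex
profile = np.array([10, 10, 40, 50, 90, 100, 90, 50, 40, 10, 10], float)
br, gr = reflective_params(profile, xa=2, xb=8, xc=4, xd=6)
# br = (6 - 4) / (8 - 2) = 1/3; gr > 1 because the reflex is brighter
```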
The present invention was tested on 53 fundus images provided by a hospital: 16 normal fundus images, 14 images suspected of retinal arteriosclerosis, and 23 images with arteriosclerosis. Retinal arteriosclerosis detection was performed on these images using the proposed method. Comparing the three groups of reflective parameters of a test image against the detection threshold gives three cases: (1) if all three groups of reflective parameters are below the threshold, the fundus is judged free of retinal arteriosclerosis; (2) if one group of parameters exceeds the threshold, the fundus is judged to have retinal arteriosclerosis; (3) in all other cases, the fundus is judged as suspected retinal arteriosclerosis. The detection accuracy was 93.7% on normal fundus images, 92.8% on images suspected of retinal arteriosclerosis, and 91.3% on images with retinal arteriosclerosis, for an average detection accuracy of 92.6%.
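The three-case decision rule can be sketched as below. Because this excerpt does not spell out whether "one group exceeds the threshold" means one (BR, GR) pair exceeding both of its thresholds, that reading is an assumption here, and the threshold values are illustrative, not the tuned values of the method:

```python
def classify_fundus(groups, br_th, gr_th):
    """groups: three (BR, GR) pairs, one per sampling line.
    All parameters at or below threshold -> "normal";
    some pair exceeding both thresholds  -> "arteriosclerosis";
    anything in between                  -> "suspected".
    (Assumed reading of the rule; thresholds are placeholders.)"""
    if all(br <= br_th and gr <= gr_th for br, gr in groups):
        return "normal"
    if any(br > br_th and gr > gr_th for br, gr in groups):
        return "arteriosclerosis"
    return "suspected"

verdict = classify_fundus([(0.2, 1.1), (0.6, 1.8), (0.3, 1.2)],
                          br_th=0.5, gr_th=1.5)  # -> "arteriosclerosis"
```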
In summary, the present invention determines a detection threshold for retinal arteriosclerosis and proposes a fundus-image retinal arteriosclerosis detection method based on an improved encoder-decoder network. The method accurately segments retinal arteriovenous vessels and arterial reflective bands, overcoming the inability of traditional methods to do so; it reduces the interference of illumination and other fundus tissue with reflective-band segmentation, improves the transmission of feature and gradient information, achieves high-accuracy segmentation, and completes the detection of retinal arteriosclerosis.
The above is only a preferred embodiment of the present invention and is not intended to limit its scope of protection. It should be understood that the invention is not limited to the implementations described here, which are provided to help those skilled in the art practice the invention. Those skilled in the art can readily make further improvements and refinements without departing from the spirit and scope of the invention; the invention is therefore limited only by the content and scope of its claims, which are intended to cover all alternatives and equivalents within the spirit and scope of the invention as defined by the appended claims.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910966530.4A CN110751636B (en) | 2019-10-12 | 2019-10-12 | Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110751636A CN110751636A (en) | 2020-02-04 |
CN110751636B true CN110751636B (en) | 2023-04-21 |
Family
ID=69278073
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||