
CN117522719B - Bronchoscope image auxiliary optimization system based on machine learning - Google Patents


Info

Publication number
CN117522719B
Authority
CN
China
Prior art keywords
pixel
pixels
window
image
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410017385.6A
Other languages
Chinese (zh)
Other versions
CN117522719A (en)
Inventor
宗政
吴菊
张霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zigong First Peoples Hospital
Original Assignee
Zigong First Peoples Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zigong First Peoples Hospital filed Critical Zigong First Peoples Hospital
Priority to CN202410017385.6A priority Critical patent/CN117522719B/en
Publication of CN117522719A publication Critical patent/CN117522719A/en
Application granted granted Critical
Publication of CN117522719B publication Critical patent/CN117522719B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30021Catheter; Guide wire
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a bronchoscope image auxiliary optimization system based on machine learning, which comprises the following modules: an image acquisition module, used to acquire a bronchial image; a reference region acquisition module, used to obtain a reference region for each pixel of the bronchial image; a corrected confusion degree acquisition module, used to obtain the initial confusion degree of each pixel from its reference region and to obtain the corrected confusion degree of each pixel from the initial confusion degree; and an image enhancement module, used to obtain corrected clustering distances between every two pixels from the corrected confusion degrees, to obtain a number of independent texture regions from the corrected clustering distances, and to obtain an enhanced bronchial image from the independent texture regions. The system thus enhances lesion areas in the bronchial image by analyzing how well each area of the image matches lesion characteristics.

Description

Bronchoscope image-assisted optimization system based on machine learning

Technical field

The present invention relates to the field of image processing technology, and in particular to a bronchoscope image-assisted optimization system based on machine learning.

Background

A bronchoscope allows direct observation of the interior of the bronchi and parts of the lungs, providing first-hand visual evidence of disease or abnormality. Because the bronchi are internal structures of the human body and natural light cannot reach them, the bronchoscope carries a fill light that illuminates a local area of the bronchus, and the built-in camera of the bronchoscope then captures bronchial images.

Because bronchial images are captured under a fill light, they contain highlight areas whose gray values are generally large. At the same time, inflammatory lesions in the bronchus are typically white, so the gray values of lesion areas are also large. The lighting in a bronchial image therefore interferes with extraction of the lesion area, and the image must be enhanced to remove this interference.

Conventional enhancement methods compress small gray values further and stretch large gray values further, so they cannot separate the gray values of lesion areas from those of the highlight areas produced by the lighting. An enhancement method is therefore needed that distinguishes the gray values of lesion areas from those of other areas, making lesion extraction easier.

Summary of the invention

The present invention provides a machine learning-based bronchoscope image-assisted optimization system to solve the existing problem of how to use image enhancement to separate the gray values of the lesion area from those of other areas.

The machine learning-based bronchoscope image-assisted optimization system of the present invention adopts the following technical solution:

One embodiment of the present invention provides a machine learning-based bronchoscope image-assisted optimization system that includes the following modules:

an image acquisition module, used to acquire bronchial images;

a reference area acquisition module, used to obtain the reference area of each pixel in the bronchial image from the gray-value differences of the pixels in the bronchial image;

a corrected confusion degree acquisition module, used to obtain the initial confusion degree of each pixel from the area and edge complexity of that pixel's reference area, and to obtain the corrected confusion degree of each pixel from its initial confusion degree and the gradients of the pixels in its reference area;

an image enhancement module, used to obtain the corrected clustering distance between every two pixels from the corrected confusion degrees, to cluster the pixels of the bronchial image according to these corrected clustering distances into multiple independent texture areas, and to enhance the bronchial image according to the corrected confusion degree of each pixel in each independent texture area, obtaining an enhanced bronchial image.

Preferably, the reference area of each pixel in the bronchial image is obtained from the gray-value differences of the pixels in the bronchial image by the following specific method:

a window side length L is preset; centered on each pixel, an L*L window is obtained and recorded as the reference window of that pixel; the variance of the gray values of all pixels in each pixel's reference window is obtained and recorded as the gray variance of that pixel's reference window, and the mean of the gray variances of all pixels in the bronchial image is recorded as the reference variance;

for any pixel, its reference window is recorded as the first extended window, and the difference between the gray variance of the first extended window and the reference variance is recorded as the deviation of the first extended window; a second extended window is obtained from the deviation of the first extended window, its gray variance is obtained, and the difference between the gray variance of the second extended window and the reference variance is recorded as the deviation of the second extended window; a third extended window is obtained from the deviation of the second extended window; and so on, ending when the number of pixels in the extended window is greater than or equal to a preset maximum threshold B, less than or equal to a preset minimum threshold A, or the deviation of the extended window equals 0, which yields a number of extended windows;

among all extended windows of each pixel, the extended window with the smallest deviation is selected as that pixel's reference area.

Preferably, the second extended window is obtained from the deviation of the first extended window by the following specific method:

when the deviation of the first extended window is greater than 0, the outermost ring of pixels of the first extended window is obtained and recorded as the peripheral pixels of the first extended window, and one pixel arbitrarily selected from these peripheral pixels is removed to obtain the second extended window;

when the deviation of the first extended window is less than 0, the peripheral pixels of the first extended window are obtained, the pixels that are adjacent to these peripheral pixels but do not belong to the first extended window are obtained and recorded as the outer adjacent pixels of the first extended window, and one arbitrarily selected outer adjacent pixel is added to the first extended window to obtain the second extended window.

Preferably, the initial confusion degree of each pixel is obtained from the area and edge complexity of that pixel's reference area by the following specific method:

the edge complexity of each pixel's reference area is obtained;

the initial confusion degree of each pixel is computed from the edge complexity and area of its reference area as

H_i = norm(-S_i) × σ_i

where S_i denotes the area of the reference area of the i-th pixel, norm() denotes linear normalization, σ_i denotes the edge complexity of the reference area of the i-th pixel, and H_i denotes the initial confusion degree of the i-th pixel.

Preferably, the edge complexity of each pixel's reference area is obtained by the following specific method:

the peripheral pixels of each pixel's reference area are obtained, chain-code analysis is performed on all peripheral pixels of the reference area to obtain the chain-code sequence of the reference area, and the variance of all values in this chain-code sequence is recorded as the edge complexity of that pixel's reference area.

Preferably, the corrected confusion degree of each pixel is obtained from its initial confusion degree and the gradients of the pixels in its reference area by the following specific method:

the gradient eccentricity angle of each reference pixel of each pixel is obtained from the gradients of the pixels in that pixel's reference area;

the corrected confusion degree H'_i of each pixel is computed from the gradient eccentricity angles of its reference pixels and its initial confusion degree, where d_{i,q} denotes the Euclidean distance between the i-th pixel and the q-th pixel of its reference area, Q_i denotes the number of pixels in the reference area of the i-th pixel, and H_i denotes the initial confusion degree of the i-th pixel.

Preferably, the gradient eccentricity angle of each reference pixel of each pixel is obtained from the gradients of the pixels in that pixel's reference area by the following specific method:

the pixels in each pixel's reference area are recorded as the reference pixels of that pixel, and any one reference pixel is recorded as the target reference pixel; the angle between the vector formed by the pixel and its target reference pixel and the gradient direction of the target reference pixel is recorded as the gradient eccentricity angle between the pixel and the target reference pixel;

the gradient eccentricity angle of each reference pixel of each pixel is obtained in this way.

Preferably, the corrected clustering distance between every two pixels is obtained from the corrected confusion degrees of the pixels by the following specific method:

the corrected clustering distance D'_{i,j} between the i-th pixel and the j-th pixel is computed from H'_i, the corrected confusion degree of the i-th pixel, H'_j, the corrected confusion degree of the j-th pixel, and d_{i,j}, the Euclidean distance between the i-th pixel and the j-th pixel.

Preferably, the pixels of the bronchial image are clustered according to the corrected clustering distance between every two pixels into multiple independent texture areas by the following specific method:

clustering parameters are set for the pixels, and, based on the clustering parameters and the corrected clustering distances between pixels, the ISODATA algorithm is used to cluster the pixels into a number of independent texture areas.

Preferably, the bronchial image is enhanced according to the corrected confusion degree of each pixel in each independent texture area to obtain the enhanced bronchial image by the following specific method:

the mean of the corrected confusion degrees of all pixels in each independent texture area is obtained and recorded as the corrected confusion degree of that independent texture area; the gray value of each pixel of each independent texture area in the bronchial image is multiplied by the corrected confusion degree of that independent texture area, which enhances the image and yields the enhanced bronchial image.

The beneficial effects of the technical solution of the present invention are:

A bronchial image is acquired, and the reference area of each pixel is obtained from gray-value differences. Because the gray values in lesion areas of a bronchial image vary strongly and lesion edges are irregular, the initial confusion degree of each pixel is obtained from the edge complexity and area of its reference area; this initial confusion degree describes the distinguishing characteristics of lesions. Because the initial confusion degree alone cannot separate highlight areas from lesion areas, the initial confusion degree is corrected by analyzing how well the gray-value variation in each pixel's reference area matches that of a lesion area, yielding the corrected confusion degree of each pixel. The pixels of the bronchial image are then clustered and enhanced according to their corrected confusion degrees, producing an enhanced bronchial image in which lesion areas are highlighted, which facilitates subsequent lesion analysis.

Brief description of the drawings

In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without inventive effort.

Figure 1 is a structural block diagram of the machine learning-based bronchoscope image-assisted optimization system of the present invention;

Figure 2 is a bronchial image containing inflammatory lesions provided by the present invention.

Detailed description of embodiments

To further explain the technical means and effects adopted by the present invention to achieve its intended purpose, the specific implementation, structure, features and effects of the machine learning-based bronchoscope image-assisted optimization system proposed by the present invention are described in detail below in conjunction with the drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the specific features, structures or characteristics of one or more embodiments may be combined in any suitable manner.

Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the technical field to which the invention belongs.

The specific scheme of the machine learning-based bronchoscope image-assisted optimization system provided by the present invention is described below with reference to the accompanying drawings.

Referring to Figure 1, which shows a structural block diagram of a machine learning-based bronchoscope image-assisted optimization system provided by one embodiment of the present invention, the system includes the following modules:

Image acquisition module 101, used to acquire bronchial images.

To implement the machine learning-based bronchoscope image-assisted optimization system proposed in this embodiment, bronchial images must first be acquired.

The specific process of acquiring bronchial images is as follows: bronchial images are acquired with a bronchoscope. Figure 2 shows a bronchial image; the black rectangular frame marks an inflammatory infiltrative lesion, and the black circle marks a highlight produced by the lighting. The bronchial image is converted to grayscale to obtain its grayscale image. For convenience of description, the grayscale image of the bronchial image is still called the bronchial image in what follows.
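As a minimal sketch of this acquisition step, the snippet below loads one captured frame and converts it to grayscale; the OpenCV calls and the file name are assumptions of this illustration, since the patent does not prescribe a particular library.

```python
import cv2

# Load one captured bronchoscope frame (placeholder file name) and convert it
# to a single-channel grayscale image; this grayscale array is what the later
# modules treat as the "bronchial image".
frame = cv2.imread("bronchus_frame.png")                  # BGR frame from the scope camera
bronchial_image = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```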

Reference area acquisition module 102, used to obtain the reference area of each pixel from the bronchial image.

It should be noted that, since there is a gray-level difference between lesion areas and non-lesion areas in a bronchial image, regions can first be divided according to the gray values of the pixels.

Specifically, a window side length L is preset; centered on each pixel, an L*L window is obtained and recorded as the reference window of that pixel. The variance of the gray values of all pixels in each pixel's reference window is obtained and recorded as the gray variance of that pixel's reference window, and the mean of the gray variances of all pixels in the bronchial image is recorded as the reference variance.

In this embodiment, L is taken as 7 for description; other embodiments may use other values, and this embodiment imposes no specific limitation.
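A sketch of the reference-window statistics, assuming the grayscale image is a NumPy array; the function name and the use of SciPy's uniform_filter to obtain local variances are conveniences of this illustration, not part of the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def reference_variance(img: np.ndarray, L: int = 7):
    """Gray variance of each pixel's L*L reference window and its image-wide mean."""
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size=L, mode="reflect")            # E[x] over the window
    local_mean_sq = uniform_filter(img * img, size=L, mode="reflect")   # E[x^2] over the window
    gray_variance = local_mean_sq - local_mean ** 2                     # Var = E[x^2] - E[x]^2
    return gray_variance, float(gray_variance.mean())                   # per-pixel map, reference variance
```

The second return value is the reference variance against which every extended window is compared in the next step.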

It should be noted that the above process only divides the image into windows artificially; at this point dissimilar pixels cannot yet be separated nor similar pixels grouped together, so the windows need to be adjusted.

Further, for any pixel, its reference window is recorded as the first extended window, and the difference between the gray variance of the first extended window and the reference variance is recorded as the deviation of the first extended window. The second extended window is obtained from the deviation of the first extended window, its gray variance is obtained, and the difference between this gray variance and the reference variance is recorded as the deviation of the second extended window; the third extended window is obtained from the deviation of the second extended window. This continues until the number of pixels in the extended window is greater than or equal to a preset maximum threshold B, less than or equal to a preset minimum threshold A, or the deviation of the extended window equals 0, yielding a number of extended windows.

In this embodiment, A is taken as 4 and B as 196 for description; other embodiments may use other values, and this embodiment imposes no specific limitation.

Further, the method of obtaining the second extended window from the deviation of the first extended window is:

when the deviation of the first extended window is greater than 0, the outermost ring of pixels of the first extended window is obtained and recorded as the peripheral pixels of the first extended window, and one pixel arbitrarily selected from these peripheral pixels is removed to obtain the second extended window;

when the deviation of the first extended window is less than 0, the peripheral pixels of the first extended window are obtained, the pixels adjacent to these peripheral pixels but not belonging to the first extended window are obtained and recorded as the outer adjacent pixels of the first extended window, and one arbitrarily selected outer adjacent pixel is added to the first extended window to obtain the second extended window.

Further, among all extended windows of each pixel, the extended window with the smallest deviation is selected as that pixel's reference area.

It should be noted that, for pixels at the edge of the bronchial image, when the window of a pixel cannot reach the required size, the largest possible window within the image is used.
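The following sketch shows one way to realize this expansion loop for a single pixel, assuming the window is kept as an explicit set of coordinates; removing or adding "one arbitrarily selected" pixel is implemented as taking the first boundary candidate found, and the iteration cap is an added safeguard not stated in the patent.

```python
import numpy as np

def reference_region(img, y, x, ref_var, L=7, A=4, B=196):
    """Grow or shrink the window around (y, x) until its gray variance approaches
    the reference variance; return the extended window with the smallest deviation."""
    h, w = img.shape
    half = L // 2
    window = {(r, c) for r in range(max(0, y - half), min(h, y + half + 1))
                     for c in range(max(0, x - half), min(w, x + half + 1))}
    best, best_dev = set(window), None
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for _ in range(2 * B):                                   # hard cap to guarantee termination
        vals = np.array([img[p] for p in window], dtype=np.float64)
        dev = vals.var() - ref_var                           # deviation of the current extended window
        if best_dev is None or abs(dev) < abs(best_dev):
            best, best_dev = set(window), dev
        if dev == 0 or len(window) <= A or len(window) >= B:
            break
        if dev > 0:                                          # too heterogeneous: drop a peripheral pixel
            peripheral = [p for p in window
                          if any((p[0] + dy, p[1] + dx) not in window for dy, dx in neighbours)]
            window.discard(peripheral[0])
        else:                                                # too homogeneous: add an outer adjacent pixel
            outer = [(p[0] + dy, p[1] + dx) for p in window for dy, dx in neighbours
                     if 0 <= p[0] + dy < h and 0 <= p[1] + dx < w
                     and (p[0] + dy, p[1] + dx) not in window]
            if not outer:
                break
            window.add(outer[0])
    return best
```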

At this point the reference area of each pixel has been obtained. The gray values within each pixel's reference area are relatively similar, so the reference area generally describes an object with a single attribute, which may be an independent lesion area, an independent highlight area, or an independent area that is neither lesion nor highlight.

Corrected confusion degree acquisition module 103, used to obtain the initial confusion degree of each pixel from its reference area, and the corrected confusion degree of each pixel from its initial confusion degree.

Specifically, the peripheral pixels of each pixel's reference area are obtained, chain-code analysis is performed on all peripheral pixels of the reference area to obtain the chain-code sequence of the reference area, and the variance of all values in this chain-code sequence is recorded as the edge complexity of that pixel's reference area.

The initial confusion degree of each pixel is:

H_i = norm(-S_i) × σ_i

where S_i denotes the area of the reference area of the i-th pixel; the larger this value, the more pixels are needed to reach the reference variance, which means the gray-level differences around the i-th pixel are small and the initial confusion degree of the i-th pixel is small. norm() denotes linear normalization. σ_i denotes the edge complexity of the reference area of the i-th pixel; the larger this value, the more the edge direction of the i-th reference area varies and the more complex the outer edge of the texture containing the i-th pixel, so the initial confusion degree of the i-th pixel is large. H_i denotes the initial confusion degree of the i-th pixel.
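A sketch of the edge-complexity and initial-confusion computation, assuming each reference area is available as a binary mask; using cv2.findContours to walk the boundary with an 8-direction Freeman code is a choice of this illustration, and norm() is taken as min-max normalization over all pixels, as the formula suggests.

```python
import numpy as np
import cv2

def edge_complexity(region_mask: np.ndarray) -> float:
    """Variance of the Freeman chain code along the outer boundary of the region."""
    contours, _ = cv2.findContours(region_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = contours[0][:, 0, :]                               # ordered boundary pixels (x, y)
    dirs = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
            (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}
    steps = np.diff(np.vstack([pts, pts[:1]]), axis=0)       # close the contour
    codes = [dirs[(int(np.sign(dx)), int(np.sign(dy)))]
             for dx, dy in steps if (dx, dy) != (0, 0)]
    return float(np.var(codes)) if codes else 0.0

def initial_confusion(areas: np.ndarray, complexities: np.ndarray) -> np.ndarray:
    """H_i = norm(-S_i) * sigma_i, with norm() as min-max normalization over all pixels."""
    s = -areas.astype(np.float64)
    norm_s = (s - s.min()) / (s.max() - s.min() + 1e-12)
    return norm_s * complexities
```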

It should be noted that the initial confusion degree mainly accounts for the gray-level differences among the pixels surrounding each pixel and for the complexity of the outer edge of the texture containing that pixel. In a bronchial image, however, not only lesion areas but also highlight areas have these characteristics, so the initial confusion degree cannot distinguish highlight areas from lesion areas.

To further distinguish highlight areas from lesion areas, their distinguishing characteristics must be analyzed. In a highlight area, the gray value generally decreases from the highlight center outward, whereas a lesion area does not show this behavior, so the two kinds of areas can be distinguished on the basis of this feature.

Further, the pixels in each pixel's reference area are recorded as the reference pixels of that pixel, and any one reference pixel is recorded as the target reference pixel. The angle between the vector formed by the pixel and its target reference pixel and the gradient direction of the target reference pixel is obtained and recorded as the gradient eccentricity angle between the pixel and the target reference pixel. In this way the gradient eccentricity angle of every reference pixel of each pixel is obtained.

The corrected confusion degree of each pixel is calculated as follows:

where d_{i,q} denotes the Euclidean distance between the i-th pixel and the q-th pixel of its reference area, and the gradient eccentricity angle of the q-th reference pixel of the i-th pixel measures how far that reference pixel's gradient direction departs from pointing away from the i-th pixel: the larger this angle, the less centripetal the gradient direction. Because the gray value of a highlight area decreases from its center outward, the gradient directions in a highlight area generally point from the highlight center toward the surroundings, so when the reference area of the i-th pixel is a highlight area the gradient eccentricity angles of its reference pixels should be small. A larger gradient eccentricity angle therefore makes it less likely that the i-th pixel lies in a highlight area and more likely that it lies in a lesion area, and the corresponding corrected confusion degree is larger. Q_i denotes the number of pixels in the reference area of the i-th pixel. The distance weight grows as the distance between the i-th pixel and the q-th pixel shrinks, so the i-th pixel is influenced more strongly by nearby reference pixels. H_i denotes the initial confusion degree of the i-th pixel, and H'_i denotes the corrected confusion degree of the i-th pixel.
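The exact expression for H'_i is given as an equation in the original and is not reproduced here; the sketch below therefore shows the gradient-eccentricity-angle computation itself and aggregates it as a distance-weighted average that scales H_i, which matches the described behavior (nearer reference pixels weigh more, larger angles raise the correction) but is an assumption rather than the patent's formula.

```python
import numpy as np

def corrected_confusion(img, i_yx, region_pixels, H_i):
    """Scale the initial confusion degree H_i by a distance-weighted mean gradient
    eccentricity angle over the reference area (assumed aggregation form)."""
    gy, gx = np.gradient(img.astype(np.float64))     # image gradient field (rows, cols)
    yi, xi = i_yx
    num = den = 0.0
    for yq, xq in region_pixels:
        if (yq, xq) == (yi, xi):
            continue
        v = np.array([xq - xi, yq - yi], dtype=np.float64)        # vector from pixel i to reference pixel q
        g = np.array([gx[yq, xq], gy[yq, xq]], dtype=np.float64)  # gradient direction at pixel q
        nv, ng = np.linalg.norm(v), np.linalg.norm(g)
        if nv == 0 or ng == 0:
            continue
        angle = np.arccos(np.clip(v @ g / (nv * ng), -1.0, 1.0))  # gradient eccentricity angle
        weight = 1.0 / nv                                         # nearer reference pixels count more
        num += weight * angle
        den += weight
    return H_i * (num / den) if den > 0 else H_i
```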

At this point the corrected confusion degree of each pixel has been obtained. The corrected confusion degree is an index that describes how well each pixel matches the characteristics of a lesion area, and it can separate lesion areas from non-lesion areas.

Image enhancement module 104, used to cluster the pixels according to their corrected confusion degrees into a number of independent texture areas, and to enhance the bronchial image according to the independent texture areas and the corrected confusion degrees to obtain an enhanced bronchial image.

It should be noted that, to separate lesion areas from non-lesion areas, the bronchial image must be clustered. The conventional ISODATA algorithm sets the clustering distance from the gray values of the pixels and the distances between them; this does not take the characteristics of lesion areas into account, so it cannot separate lesion areas from non-lesion areas. To separate them better, the clustering distance used in the algorithm must be corrected.

Specifically, the corrected clustering distance between pixels is calculated as follows:

where H'_i denotes the corrected confusion degree of the i-th pixel, H'_j denotes the corrected confusion degree of the j-th pixel, d_{i,j} denotes the Euclidean distance between the i-th pixel and the j-th pixel, and D'_{i,j} denotes the corrected clustering distance between the i-th pixel and the j-th pixel.

Further, the clustering parameters are set: the number of cluster centers is determined from the number of pixels in the bronchial image and the preset window side length L; the minimum number of elements per class is 5, the allowed within-class difference is 10, the class merging threshold is 8, a limited number of pairs of cluster centers may be merged in one iteration, and the maximum number of iterations is 30.

It should be noted that the clustering parameters are the parameters that must be set manually in the ISODATA algorithm; since ISODATA is an existing algorithm and its parameter setting is routine, it is not described further here.

Further, based on the clustering parameters and the corrected clustering distances between pixels, the ISODATA algorithm is used to cluster the pixels into a number of independent texture areas.
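ISODATA is not available as a ready-made routine in SciPy or scikit-learn, so the sketch below is only a stand-in: it builds a corrected pairwise distance matrix from an assumed combination of Euclidean distance and the difference of corrected confusion degrees (the patent's exact distance formula is given as an equation and not reproduced here) and feeds it to hierarchical clustering in place of the ISODATA step. The coordinates are assumed to be a subsample of image pixels, since a full pairwise matrix over all pixels would be impractically large.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

def texture_regions(coords, H_prime, n_clusters):
    """Group pixels into independent texture areas using corrected pairwise distances.

    coords     : (N, 2) pixel coordinates (a subsample for tractability)
    H_prime    : (N,) corrected confusion degrees of those pixels
    n_clusters : target number of cluster centres
    """
    coords = np.asarray(coords, dtype=np.float64)
    H_prime = np.asarray(H_prime, dtype=np.float64)
    d_euc = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    d_conf = np.abs(H_prime[:, None] - H_prime[None, :])
    D = d_euc * (1.0 + d_conf)                                # assumed corrected clustering distance
    Z = linkage(squareform(D, checks=False), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")   # one label per pixel
```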

At this point the clustering algorithm has completed the segmentation of lesion areas and non-lesion areas; image enhancement is performed next based on the segmentation result.

Further, the mean of the corrected confusion degrees of all pixels in each independent texture area is obtained and recorded as the corrected confusion degree of that independent texture area. The corrected confusion degrees of the independent texture areas are normalized with max-min normalization to obtain the normalized corrected confusion degree of each independent texture area; for convenience of description, the normalized corrected confusion degree of each independent texture area is still called the corrected confusion degree of that area. The gray value of each pixel of each independent texture area in the bronchial image is multiplied by the corrected confusion degree of that area, which enhances the image and yields the enhanced bronchial image.

It should be noted that, after the gray value of each pixel of an independent texture area is multiplied by the corrected confusion degree of that area, the resulting gray value is rounded down.
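A sketch of this final enhancement step, assuming per-pixel arrays of cluster labels and corrected confusion degrees of the same shape as the image; the max-min normalization and the rounding down follow the description above.

```python
import numpy as np

def enhance(img, labels, H_prime):
    """Multiply each independent texture area by its normalized mean corrected
    confusion degree and round down to obtain the enhanced bronchial image."""
    img = img.astype(np.float64)
    out = np.zeros_like(img)
    region_ids = np.unique(labels)
    # mean corrected confusion degree of each independent texture area
    means = np.array([H_prime[labels == r].mean() for r in region_ids])
    # max-min normalization across the independent texture areas
    means = (means - means.min()) / (means.max() - means.min() + 1e-12)
    for r, m in zip(region_ids, means):
        out[labels == r] = img[labels == r] * m
    return np.floor(out).astype(np.uint8)             # round the enhanced gray values down
```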

The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement and the like made within the principles of the present invention shall be included within the scope of protection of the present invention.

Claims (8)

1. A machine learning-based bronchoscope image-assisted optimization system, characterized in that the system comprises the following modules:

an image acquisition module, used to acquire bronchial images;

a reference area acquisition module, used to obtain the reference area of each pixel in the bronchial image from the gray-value differences of the pixels in the bronchial image;

a corrected confusion degree acquisition module, used to obtain the initial confusion degree of each pixel from the area and edge complexity of that pixel's reference area, and to obtain the corrected confusion degree of each pixel from its initial confusion degree and the gradients of the pixels in its reference area;

wherein obtaining the initial confusion degree of each pixel from the area and edge complexity of its reference area comprises the following specific method: obtaining the edge complexity of each pixel's reference area; computing the initial confusion degree of each pixel from the edge complexity and area of its reference area as

H_i = norm(-S_i) × σ_i

where S_i denotes the area of the reference area of the i-th pixel, norm() denotes linear normalization, σ_i denotes the edge complexity of the reference area of the i-th pixel, and H_i denotes the initial confusion degree of the i-th pixel;

wherein obtaining the edge complexity of each pixel's reference area comprises the following specific method: obtaining the peripheral pixels of each pixel's reference area, performing chain-code analysis on all peripheral pixels of the reference area to obtain the chain-code sequence of the reference area, and recording the variance of all values in the chain-code sequence as the edge complexity of that pixel's reference area;

an image enhancement module, used to obtain the corrected clustering distance between every two pixels from the corrected confusion degree of each pixel, to cluster the pixels of the bronchial image according to the corrected clustering distance between every two pixels into multiple independent texture areas, and to enhance the bronchial image according to the corrected confusion degree of each pixel in each independent texture area to obtain an enhanced bronchial image.

2. The machine learning-based bronchoscope image-assisted optimization system according to claim 1, characterized in that obtaining the reference area of each pixel in the bronchial image from the gray-value differences of the pixels in the bronchial image comprises the following specific method:

presetting a window side length L; centered on each pixel, obtaining an L*L window and recording it as the reference window of that pixel; obtaining the variance of the gray values of all pixels in each pixel's reference window and recording it as the gray variance of that pixel's reference window, and recording the mean of the gray variances of all pixels in the bronchial image as the reference variance;

for any pixel, recording its reference window as the first extended window and recording the difference between the gray variance of the first extended window and the reference variance as the deviation of the first extended window; obtaining a second extended window from the deviation of the first extended window, obtaining the gray variance of the second extended window, recording the difference between the gray variance of the second extended window and the reference variance as the deviation of the second extended window, and obtaining a third extended window from the deviation of the second extended window; continuing in this way until the number of pixels in the extended window is greater than or equal to a preset maximum threshold B, less than or equal to a preset minimum threshold A, or the deviation of the extended window equals 0, thereby obtaining a number of extended windows;

selecting, among all extended windows of each pixel, the extended window with the smallest deviation as the reference area of that pixel.

3. The machine learning-based bronchoscope image-assisted optimization system according to claim 2, characterized in that obtaining the second extended window from the deviation of the first extended window comprises the following specific method:

when the deviation of the first extended window is greater than 0, obtaining the outermost ring of pixels of the first extended window and recording them as the peripheral pixels of the first extended window, and removing one pixel arbitrarily selected from the peripheral pixels of the first extended window to obtain the second extended window;

when the deviation of the first extended window is less than 0, obtaining the peripheral pixels of the first extended window, obtaining the pixels that are adjacent to the peripheral pixels of the first extended window and do not belong to the first extended window, recording them as the outer adjacent pixels of the first extended window, and adding one arbitrarily selected outer adjacent pixel to the first extended window to obtain the second extended window.

4. The machine learning-based bronchoscope image-assisted optimization system according to claim 1, characterized in that obtaining the corrected confusion degree of each pixel from its initial confusion degree and the gradients of the pixels in its reference area comprises the following specific method:

obtaining the gradient eccentricity angle of each reference pixel of each pixel from the gradients of the pixels in that pixel's reference area;

computing the corrected confusion degree of each pixel from the gradient eccentricity angles of its reference pixels and its initial confusion degree, where d_{i,q} denotes the Euclidean distance between the i-th pixel and the q-th pixel of its reference area, Q_i denotes the number of pixels in the reference area of the i-th pixel, H_i denotes the initial confusion degree of the i-th pixel, and H'_i denotes the corrected confusion degree of the i-th pixel.

5. The machine learning-based bronchoscope image-assisted optimization system according to claim 4, characterized in that obtaining the gradient eccentricity angle of each reference pixel of each pixel from the gradients of the pixels in that pixel's reference area comprises the following specific method:

recording the pixels in each pixel's reference area as the reference pixels of that pixel, and recording any one reference pixel as the target reference pixel of that pixel; obtaining the angle between the vector formed by the pixel and its target reference pixel and the gradient direction of the target reference pixel, and recording it as the gradient eccentricity angle between the pixel and the target reference pixel;

obtaining in this way the gradient eccentricity angle of each reference pixel of each pixel.

6. The machine learning-based bronchoscope image-assisted optimization system according to claim 1, characterized in that obtaining the corrected clustering distance between every two pixels from the corrected confusion degree of each pixel comprises the following specific method:

computing the corrected clustering distance D'_{i,j} between the i-th pixel and the j-th pixel, where H'_i denotes the corrected confusion degree of the i-th pixel, H'_j denotes the corrected confusion degree of the j-th pixel, and d_{i,j} denotes the Euclidean distance between the i-th pixel and the j-th pixel.

7. The machine learning-based bronchoscope image-assisted optimization system according to claim 1, characterized in that clustering the pixels of the bronchial image according to the corrected clustering distance between every two pixels into multiple independent texture areas comprises the following specific method:

setting clustering parameters for the pixels, and, based on the clustering parameters and the corrected clustering distances between pixels, clustering the pixels with the ISODATA algorithm to obtain a number of independent texture areas.

8. The machine learning-based bronchoscope image-assisted optimization system according to claim 1, characterized in that enhancing the bronchial image according to the corrected confusion degree of each pixel in each independent texture area to obtain an enhanced bronchial image comprises the following specific method:

obtaining the mean of the corrected confusion degrees of all pixels in each independent texture area and recording it as the corrected confusion degree of that independent texture area; multiplying the gray value of each pixel of each independent texture area in the bronchial image by the corrected confusion degree of that independent texture area to enhance the image and obtain the enhanced bronchial image.
CN202410017385.6A 2024-01-05 2024-01-05 Bronchoscope image auxiliary optimization system based on machine learning Active CN117522719B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410017385.6A CN117522719B (en) 2024-01-05 2024-01-05 Bronchoscope image auxiliary optimization system based on machine learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410017385.6A CN117522719B (en) 2024-01-05 2024-01-05 Bronchoscope image auxiliary optimization system based on machine learning

Publications (2)

Publication Number Publication Date
CN117522719A CN117522719A (en) 2024-02-06
CN117522719B true CN117522719B (en) 2024-03-22

Family

ID=89764941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410017385.6A Active CN117522719B (en) 2024-01-05 2024-01-05 Bronchoscope image auxiliary optimization system based on machine learning

Country Status (1)

Country Link
CN (1) CN117522719B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117893533B (en) * 2024-03-14 2024-05-28 自贡市第一人民医院 Image feature-based heart-chest ratio intelligent detection method and system
CN117934474B (en) * 2024-03-22 2024-06-11 自贡市第一人民医院 A gastrointestinal endoscopy image enhancement processing method
CN118787301B (en) * 2024-09-10 2024-11-26 大连清东科技有限公司 A bronchoscope-assisted navigation method for respiratory infectious diseases


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002015113A2 (en) * 2000-08-14 2002-02-21 University Of Maryland, Baltimore County Mammography screening to detect and classify microcalcifications
CN114648530A (en) * 2022-05-20 2022-06-21 潍坊医学院 CT image processing method
CN114994102A (en) * 2022-08-04 2022-09-02 武汉钰品研生物科技有限公司 X-ray-based food foreign matter traceless rapid detection method
CN115330820A (en) * 2022-10-14 2022-11-11 江苏启灏医疗科技有限公司 Tooth image segmentation method based on X-ray film
CN116109644A (en) * 2023-04-14 2023-05-12 东莞市佳超五金科技有限公司 Surface defect detection method for copper-aluminum transfer bar
CN116342583A (en) * 2023-05-15 2023-06-27 山东超越纺织有限公司 Anti-pilling performance detection method for spinning production and processing
CN116269467A (en) * 2023-05-19 2023-06-23 中国人民解放军总医院第八医学中心 Information acquisition system before debridement of wounded patient
CN116310290A (en) * 2023-05-23 2023-06-23 山东中泳电子股份有限公司 Method for correcting swimming touch pad feedback time
CN116611748A (en) * 2023-07-20 2023-08-18 吴江市高瑞庭园金属制品有限公司 Titanium alloy furniture production quality monitoring system
CN116630314A (en) * 2023-07-24 2023-08-22 日照元鼎包装有限公司 Image processing-based preservation carton film coating detection method
CN116934755A (en) * 2023-09-18 2023-10-24 中国人民解放军总医院第八医学中心 Pulmonary tuberculosis CT image enhancement system based on histogram equalization
CN117218029A (en) * 2023-09-25 2023-12-12 南京邮电大学 Night dim light image intelligent processing method based on neural network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Differentiable Topology-Preserved Distance Transform for Pulmonary Airway Segmentation;Minghui Zhang 等;《Computer Vision and Pattern Recognition》;20220917;1-10 *
Double-lumen tubes and bronchial blockers;M. Patel 等;《BJA Education》;20230704;416-424 *
图像灰度增强算法的研究;高赟;《中国优秀硕士学位论文全文数据库 (信息科技辑)》;20070615;I138-550 *
磁瓦缺陷图像的分割与检测研究;张梦;《中国优秀硕士学位论文全文数据库 (工程科技Ⅱ辑)》;20210915;C042-132 *
荧光支气管镜在肺癌诊断中的应用价值;张霞 等;《实用癌症杂志》;20130925;第28卷(第5期);507-509 *

Also Published As

Publication number Publication date
CN117522719A (en) 2024-02-06

Similar Documents

Publication Publication Date Title
CN117522719B (en) Bronchoscope image auxiliary optimization system based on machine learning
US12051199B2 (en) Image processing method and apparatus, server, medical image processing device and storage medium
EP1568307B1 (en) Image processing device and image processing method
CN110889852B (en) Liver segmentation method based on residual error-attention deep neural network
US20190311478A1 (en) System and Method for Automatic Detection, Localization, and Semantic Segmentation of Anatomical Objects
CN113034505B (en) Glandular cell image segmentation method and glandular cell image segmentation device based on edge perception network
WO2023137914A1 (en) Image processing method and apparatus, electronic device, and storage medium
Bai et al. Automatic segmentation of cervical region in colposcopic images using K-means
CN111754453A (en) Pulmonary tuberculosis detection method, system and storage medium based on chest X-ray images
CN116188340A (en) Intestinal endoscope image enhancement method based on image fusion
CN103295010A (en) Illumination normalization method for processing face images
CN113889238B (en) Image identification method and device, electronic equipment and storage medium
CN110555866A (en) Infrared target tracking method for improving KCF feature descriptor
Liu et al. Extracting lungs from CT images via deep convolutional neural network based segmentation and two-pass contour refinement
Manikandan et al. Segmentation and detection of pneumothorax using deep learning
CN112330613B (en) Evaluation method and system for cytopathology digital image quality
CN114581698A (en) Target classification method based on space cross attention mechanism feature fusion
Wen et al. Pulmonary nodule detection based on convolutional block attention module
CN118297837B (en) Infrared simulator virtual image enhancement system based on image processing
CN114155234A (en) Method and device for identifying position of lung segment of focus, storage medium and electronic equipment
CN118469947A (en) Thoracic disease auxiliary diagnosis method based on neural network
Liu et al. Annotating early esophageal cancers based on two saliency levels of gastroscopic images
CN117523350A (en) Oral cavity image recognition method and system based on multi-mode characteristics and electronic equipment
CN114862868B (en) Cerebral apoplexy final infarction area division method based on CT perfusion source data
CN115222651A (en) Pulmonary nodule detection system based on improved Mask R-CNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant