
CN118799348A - Hue and grayscale edge detection method and system based on adaptive fusion method - Google Patents


Info

Publication number
CN118799348A
CN118799348A (application number CN202410783919.6A)
Authority
CN
China
Prior art keywords
image
edge
gray
hue
edges
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410783919.6A
Other languages
Chinese (zh)
Inventor
金义舒
黄平
郑福印
白石
邢燕好
郑钰滢
潘睿
陈皓林
史文哲
张嘉栋
李岳
侯松志
王子堃
刘朔
周洲
张国恒
姜姗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Qingheyuan Technology Co ltd
Shenyang University of Technology
Original Assignee
Shenyang Qingheyuan Technology Co ltd
Shenyang University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Qingheyuan Technology Co ltd and Shenyang University of Technology
Priority to CN202410783919.6A
Publication of CN118799348A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/174Segmentation; Edge detection involving the use of two or more images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • G06T2207/20012Locally adaptive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention is a hue and grayscale edge detection method and system based on an adaptive fusion method. The method extracts the grayscale edges and hue edges of an image, then adaptively fuses the two kinds of edge information using the V (brightness) channel of the HSV color space: during the adaptive fusion, the fusion weights of the grayscale edges and hue edges are adjusted dynamically according to the brightness characteristics of each image block. Because the adaptive fusion can selectively emphasize grayscale edges or hue edges depending on the brightness of each region, the method extracts the edge information of the image more accurately. The algorithm not only improves the accuracy of edge detection but also enhances its robustness, allowing it to work stably in a variety of complex scenes.

Description

Hue and grayscale edge detection method and system based on adaptive fusion method

Technical Field

The present invention relates to the technical field of digital image processing, and in particular to a hue and grayscale edge detection method and system based on an adaptive fusion method.

Background Art

In digital image processing, edge detection is a key step in extracting image features and is widely used in tasks such as image segmentation, object recognition, and feature extraction. Traditional edge detection methods mainly use operators such as Canny, LoG, and Sobel, which rely on grayscale information and ignore the important role of hue information in edge detection. In complex scenes, however, especially under poor lighting conditions or with low image quality, it is often difficult to extract edges accurately from grayscale information alone.

Research has shown that applying traditional edge detection operators directly in the grayscale domain easily loses some edges. Experiments found that under extreme lighting conditions the grayscale differences in an image become too small, making edge extraction in the grayscale domain difficult.

To address this problem, researchers have proposed a variety of methods for extracting edges under extreme lighting conditions. Some enhance the edge information through an image preprocessing module to improve detection accuracy; others try to expand the detection domain, introducing new information as a condition for edge extraction to improve robustness. These methods still have limitations, however: some cause edge loss or inaccurate detection, while others are very sensitive to noise and lack robustness in complex scenes.

In recent years, some researchers have tried to introduce hue information into edge detection to improve accuracy. However, edge detection applied directly to hue information is easily disturbed by noise, giving inaccurate results. How to combine hue and grayscale information effectively, achieving accurate and robust edge detection, is therefore an urgent problem in the image processing field.

Summary of the Invention

In view of the above deficiencies of the prior art, the object of the present invention is to provide a hue and grayscale edge detection method and system based on an adaptive fusion method, aiming to solve the problems of inaccurate edge detection and susceptibility to noise in the prior art.

To achieve the above object, the present invention adopts the following technical solutions:

In a first aspect, a hue and grayscale edge detection method based on an adaptive fusion method comprises the following steps:

Step 1: Convert the input color image into a grayscale image;

Step 2: Convert the color image from RGB space to HSV space and extract the hue channel;

Step 3: Apply median filtering and Gaussian blurring to the grayscale image and the hue image;

Step 4: Use the Sobel operator to convolve the grayscale image and the hue channel, extracting grayscale edge information and hue edge information respectively;

Step 5: Create a zero matrix with the same shape and data type as the grayscale edge image to store the final fusion result;

Step 6: Traverse each block of the image, dynamically compute the fusion weights of the H-channel edge image and the grayscale edge image from the mean brightness of the block in the V-channel image, and perform a weighted fusion of the H-channel edge image and the grayscale edge image in each block to generate a preliminary fused edge image;

Step 7: Set a threshold and perform a pixel-level correction of the preliminarily fused edge image: when the intensities of the grayscale edge image and the hue edge image both exceed the threshold at the same position, set the intensity at that position to the maximum value;

Step 8: Normalize the corrected edge image to obtain the final fused edge image.

Furthermore, in Step 2, the color image is converted from RGB space to HSV space and the hue channel is extracted. The conversion uses the cv2.cvtColor function, the hue dimension is extracted with image[:,:,0], and the hue array is normalized, mapping it to a fixed value range:

where H and V are the values of the hue channel and brightness channel of the image, and R, G, and B are the values of the image in the red, green, and blue color channels respectively.
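The conversion formula itself is not reproduced in this text (it appears as an image in the original patent). As a hedged stand-in, the standard RGB-to-HSV hue and value computation, consistent with the symbols H, V, R, G, and B defined above, can be sketched as:

```python
def rgb_to_hue_value(r, g, b):
    """Standard RGB -> HSV hue (degrees) and value for one pixel.

    This is the textbook conversion (what cv2.cvtColor computes, up to
    OpenCV's 0-179 hue scaling); the patent's own formula image is not
    reproduced in the source text, so this is illustrative only.
    """
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx                       # value channel: the maximum component
    if mx == mn:
        h = 0.0                  # achromatic pixel: hue is undefined, use 0
    elif mx == r:
        h = (60.0 * (g - b) / (mx - mn)) % 360.0
    elif mx == g:
        h = 60.0 * (b - r) / (mx - mn) + 120.0
    else:                        # mx == b
        h = 60.0 * (r - g) / (mx - mn) + 240.0
    return h, v
```

For example, a pure red pixel (255, 0, 0) maps to hue 0 and value 255.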

The color image is converted into a hue array and the hue channel is extracted; the extracted pixel information is converted into hue-domain information, and hue differences are extracted so as to recover image information whose grayscale differences are too small under extreme lighting.

Furthermore, in Step 3, median filtering and Gaussian blurring are applied to the grayscale image and the hue image to eliminate noise. The image is median-filtered with the cv2.medianBlur function, which takes the median of the target pixel and its neighborhood as the new value of the original pixel; Gaussian blurring is then applied with the cv2.GaussianBlur function to smooth the image, pulling each pixel's value toward those of its neighbors.

Furthermore, in Step 4, the input image is converted from BGR space to grayscale, the Sobel operator is convolved across the grayscale image to obtain gradient images in the x and y directions, and the Sobel edge detection result in the grayscale domain is obtained by computing the magnitude and direction of the gradient;

The Sobel operator is convolved with the grayscale image to compute the gradient magnitude and direction and extract the grayscale edge information, implemented with two calls to cv2.Sobel (one per direction). The grayscale edge information is computed as:

where Ax and Ay are the convolution kernels measuring edges in the x-axis and y-axis directions, Gx and Gy are the x-direction and y-direction edge images obtained after convolution, and G is the complete edge information obtained by fusing the edge images of the two directions.
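The kernel and magnitude formulas are likewise not reproduced in this text. A sketch using the standard 3×3 Sobel kernels, consistent with the symbols Ax, Ay, Gx, Gy, and G above (the patent's actual code uses cv2.Sobel with ksize=5, so the 3×3 kernels here are an illustrative assumption):

```python
import numpy as np

# Standard 3x3 Sobel kernels (the patent's code uses cv2.Sobel with
# ksize=5; these smaller kernels are an illustrative assumption).
AX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
AY = AX.T

def convolve2d(image, kernel):
    """Minimal 'valid' 2-D convolution (no padding), for illustration only."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]          # true convolution flips the kernel
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * flipped)
    return out

def sobel_magnitude(gray):
    """G = sqrt(Gx^2 + Gy^2): edge magnitude fused from both directions."""
    gx = convolve2d(gray, AX)             # Gx: x-direction edges
    gy = convolve2d(gray, AY)             # Gy: y-direction edges
    return np.sqrt(gx ** 2 + gy ** 2)
```

A flat image yields zero magnitude everywhere, while a vertical step edge produces a strong response along the step.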

Furthermore, in Step 5, a zero matrix fused_edges_init with the same shape and data type as gray_edges_norm is created to store the final fusion result.

Furthermore, in Step 6, two nested for loops traverse every block of the image, writing into the zero matrix fused_edges_init; block_height and block_width define the height and width of each block. For each block, the ending row number block_end_row and column number block_end_col are computed, the data of the current block is extracted from the normalized V-channel image v_channel_norm, and its mean v_mean is computed. From the value of v_mean, the weight weight_h of the H-channel edge image h_edges_norm within the block is computed by the weight_function function; the weight weight_gray of the grayscale edge image gray_edges_norm is then 1 minus weight_h. The weight is obtained by the following formula:

where v is the mean brightness of the V channel and k is the fusion ratio of the two edge images;

The data of the current block is extracted from the normalized grayscale edge image gray_edges_norm and the H-channel edge image h_edges_norm, giving gray_edges_block and h_edges_block respectively. Using the weights computed above, the two extracted block edge images are fused by weighted summation, and the result is stored at the current block position in fused_edges_init. The weighted fusion is computed as:

fused_block = k × gray_block + (1 - k) × hue_block
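The block traversal and weighted fusion described above can be sketched as follows. Note that the patent's actual weight_function formula is not reproduced in the text (only that brighter blocks lean more on hue edges), so the linear ramp below is a hypothetical placeholder:

```python
import numpy as np

def weight_function(v_mean, k=0.5):
    """HYPOTHETICAL weight: the patent's formula is not reproduced in the
    text; we assume a simple linear ramp in brightness, scaled by k."""
    return k * (v_mean / 255.0)

def fuse_blocks(gray_edges_norm, h_edges_norm, v_channel_norm,
                block_height=16, block_width=16, k=0.5):
    """Block-wise fusion: fused = weight_gray*gray + weight_h*hue per block."""
    fused = np.zeros_like(gray_edges_norm)        # fused_edges_init
    rows, cols = gray_edges_norm.shape
    for r in range(0, rows, block_height):
        for c in range(0, cols, block_width):
            r2 = min(r + block_height, rows)      # block_end_row
            c2 = min(c + block_width, cols)       # block_end_col
            v_mean = v_channel_norm[r:r2, c:c2].mean()
            weight_h = weight_function(v_mean, k)
            weight_gray = 1.0 - weight_h
            fused[r:r2, c:c2] = (weight_gray * gray_edges_norm[r:r2, c:c2]
                                 + weight_h * h_edges_norm[r:r2, c:c2])
    return fused
```

The document's formula writes the grayscale weight as k; here weight_gray plays that role, with weight_h = 1 - weight_gray as described in Step 6.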

Furthermore, in Step 7, the pixel-level correction compares the intensity of each pixel in the grayscale edge image and the hue edge image; when both exceed a preset threshold, the intensity of that pixel in the fused image is set to the maximum value.

Furthermore, in Step 8, the normalization uses OpenCV's cv2.normalize function to map the corrected edge image to a specified value range and to specify the data type of the output image.

In a second aspect, a system for the above hue and grayscale edge detection method based on an adaptive fusion method comprises an image conversion module, a hue extraction module, a preprocessing module, a grayscale edge extraction module, a hue edge extraction module, a block fusion module, and a pixel correction module. The hue extraction module is connected to the image conversion module; the preprocessing module is connected to the hue extraction module; the grayscale edge extraction module and the hue edge extraction module are connected in parallel to the preprocessing module and to the block fusion module; and the pixel correction module is connected to the block fusion module.

The technical solution adopted by the present invention has the following beneficial effects:

In the present application, the adaptive fusion algorithm dynamically adjusts the fusion weights of the grayscale edges and hue edges according to the local brightness characteristics of the image, so that the algorithm can selectively emphasize grayscale edges or hue edges in regions of different brightness. The pixel-level correction step further improves the accuracy of edge detection: by eliminating the interference of non-edge information, it makes the detection result more precise.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flow chart of the hue and grayscale edge detection method based on the adaptive fusion method of the present invention;

FIG. 2 is a block diagram of the system modules of the hue and grayscale edge detection method based on the adaptive fusion method of the present invention;

FIG. 3 is a block diagram of the program-implementation logic of the hue and grayscale edge detection method based on the adaptive fusion method of the present invention;

FIG. 4 is a block diagram of the fusion-algorithm logic of the hue and grayscale edge detection method based on the adaptive fusion method of the present invention;

FIG. 5 is a comparison of processing results of the hue and grayscale edge detection method based on the adaptive fusion method of the present invention;

FIG. 6 shows an embodiment of the present invention, in which (a) is the original image, (b) is an enlarged detail view, (c) is the detection result of traditional edge detection on the detail, and (d) is the detection result of the algorithm proposed in this study on the detail;

FIG. 7 shows the processing time of a single image as the block size varies.

DETAILED DESCRIPTION

To make the purpose, technical solution, and effects of the present invention clearer and more specific, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and do not limit it.

It should be noted that when a component is said to be "fixed to" or "disposed on" another component, it can be directly or indirectly on the other component. When a component is said to be "connected to" another component, it can be directly or indirectly connected to the other component.

It should also be noted that the same or similar reference numbers in the drawings of the embodiments of the present invention denote the same or similar parts. In the description of the present invention, terms such as "upper", "lower", "left", and "right" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation. The terms describing positional relationships in the drawings are therefore illustrative only and cannot be understood as limiting this patent; those of ordinary skill in the art can understand the specific meanings of the above terms according to the specific circumstances.

In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. A feature qualified by "first" or "second" may therefore explicitly or implicitly include one or more such features. In the description of the present invention, "plurality" means two or more, unless otherwise clearly and specifically defined.

In an embodiment of the present invention, referring to FIG. 1 to FIG. 6, a hue and grayscale edge detection method based on an adaptive fusion method is provided; the specific steps are as follows:

Step 1. Grayscale image extraction:

The input color image (e.g., an RGB image) is converted to a grayscale image. The conversion is implemented by the cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) function of the OpenCV library.

Step 2. Hue-domain image extraction:

The color image is converted from RGB space to HSV space and the hue channel is extracted. The cv2.cvtColor function can be used for the conversion, and image[:,:,0] extracts the hue dimension. For the subsequent gradient calculation, the hue array must be normalized, mapping it to a fixed value range.

The color image is converted into a hue array and the hue channel is extracted; the extracted pixel information is converted into hue-domain information, and hue differences are extracted so as to recover image information whose grayscale differences are too small under extreme lighting.

Step 3. Image preprocessing:

Median filtering and Gaussian blurring are applied to the grayscale image and the hue image to eliminate noise. The present invention first applies median filtering: the cv2.medianBlur function takes the median of the target pixel and its neighborhood as the new value of the original pixel, effectively removing salt-and-pepper noise while preserving the edge detail of the image. Gaussian blurring is then applied with the cv2.GaussianBlur function to smooth the image, pulling each pixel's value toward those of its neighbors and further reducing Gaussian noise.

Median filtering and Gaussian blurring are thus performed on both the grayscale and hue images. Gaussian blurring, via the cv2.GaussianBlur function, computes a weighted average around each pixel, reducing the impact of noise on edge detection.
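The median-filtering step above can be sketched in plain NumPy (a stand-in for the cv2.medianBlur call named in the text, purely for illustration; the edge-padding choice is an assumption):

```python
import numpy as np

def median_filter(image, ksize=3):
    """Replace each pixel with the median of its ksize x ksize neighborhood
    (edge-padded), as cv2.medianBlur does; effective against
    salt-and-pepper noise while preserving edges."""
    pad = ksize // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + ksize, j:j + ksize])
    return out
```

A single bright outlier pixel in an otherwise uniform image is removed entirely, which is why this filter precedes the gradient computation.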

Step 4. Grayscale edge information extraction:

The input image is converted from BGR space to grayscale, and the Sobel operator is then convolved across the grayscale image to obtain gradient images in the x and y directions. Finally, the Sobel edge detection result in the grayscale domain is obtained by computing the magnitude and direction of the gradient.

The Sobel operator is convolved with the grayscale image to compute the gradient magnitude and direction and extract the grayscale edge information, implemented with the functions cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=5) and cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=5). The grayscale edge information is computed as:

Step 5. Hue edge extraction:

The input image is converted from BGR space to HSV space and the hue channel is extracted. The hue channel is then normalized, adjusting its pixel values to the range 0 to 255 for the subsequent gradient calculation. The Sobel operator is applied to the normalized hue channel, sweeping across the hue array to compute the gradients in the x and y directions. By computing the gradient magnitude and applying normalization and brightness inversion, the Sobel edge detection result in the hue domain is obtained.

That is, the Sobel operator is convolved with the normalized hue channel, computing the magnitude of the hue-difference gradient of each pixel and its neighborhood in the x and y directions; after normalization and brightness inversion, the Sobel edge detection result in the hue domain is obtained. This is likewise implemented with cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=5) and cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=5), the same calculation as used for the grayscale edges.
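The normalization and brightness-inversion operations mentioned above can be sketched with NumPy stand-ins for the OpenCV calls (the 0-255 target range follows the text; everything else is illustrative):

```python
import numpy as np

def normalize_0_255(arr):
    """Min-max normalize to the 0-255 range, like cv2.normalize with
    NORM_MINMAX, alpha=0, beta=255."""
    lo, hi = arr.min(), arr.max()
    if hi == lo:                  # flat array: avoid division by zero
        return np.zeros_like(arr, dtype=float)
    return (arr - lo) * 255.0 / (hi - lo)

def invert_brightness(edges_0_255):
    """Invert brightness of a 0-255 edge map, as described in Step 5."""
    return 255.0 - edges_0_255
```

On the input [0, 5, 10], normalization yields [0, 127.5, 255] and inversion then yields [255, 127.5, 0].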

Step 6. Block fusion of the edge images:

First, a zero matrix fused_edges_init with the same shape and data type as gray_edges_norm is created to store the final fusion result. Two nested for loops traverse every block of the image; block_height and block_width define the height and width of each block. For each block, the ending row number block_end_row and column number block_end_col are computed to ensure the block does not exceed the actual extent of the image. The data of the current block is extracted from the normalized V-channel image v_channel_norm and its mean v_mean is computed. From the value of v_mean, the weight weight_h of the H-channel edge image (h_edges_norm) within the block is computed by the weight_function function; the weight weight_gray of the grayscale edge image (gray_edges_norm) is then 1 minus weight_h. The weight is obtained by the following formula:

The data of the current block is extracted from the normalized grayscale edge image gray_edges_norm and the H-channel edge image h_edges_norm, giving gray_edges_block and h_edges_block respectively. Using the weights computed above, the two extracted block edge images are fused by weighted summation, and the result is stored at the current block position in fused_edges_init. The weighted fusion is computed as:

fused_block = k × gray_block + (1 - k) × hue_block

The purpose of this function is to dynamically adjust the fusion weights of the H-channel and grayscale edge images according to the brightness information of the V channel. When the V-channel brightness is high, the edge information of the H channel is favored; otherwise, the edge information of the grayscale channel is relied on more. In this way, a fused edge image combining edge information from two different sources is obtained. That is, each block of the grayscale image and of the hue-domain image is traversed and processed, the two extracted block edge images are fused with the computed weights into a preliminary fused image, and a pixel-level correction is then applied to obtain the final complete edge image.

Step 7. Pixel-level correction of the preliminarily fused edge image:

First, a threshold is set, which is used to judge the intensity of pixels in the grayscale edge image gray_edges_norm and the H-channel edge image h_edges_norm. The preliminarily fused image fused_edges_init is copied to fused_edges with np.copy, so that subsequent corrections do not affect the original data. Two nested for loops then traverse every pixel of fused_edges: for each pixel whose values in both gray_edges_norm and h_edges_norm exceed the set threshold, the value of that pixel in fused_edges is set to 255. This means the fused image has maximum edge strength at a position only when both edge images show significant edge strength there. The corrected edge image fused_edges is finally normalized with OpenCV's cv2.normalize function: alpha=0 and beta=255 specify the normalized value range, norm_type=cv2.NORM_MINMAX selects min-max normalization, and dtype=cv2.CV_8U specifies an 8-bit unsigned integer output. The normalized image can be used directly for display or further processing.
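The correction step just described can be sketched as follows, assuming 0-255 normalized edge maps and using vectorized NumPy in place of the nested loops and the cv2.normalize call (the default threshold of 128 is an assumption, not a value given in the text):

```python
import numpy as np

def pixel_correction(fused_edges_init, gray_edges_norm, h_edges_norm,
                     threshold=128):
    """Where BOTH source edge maps exceed the threshold, force the fused
    edge strength to the maximum (255); elsewhere keep the fused value."""
    fused = fused_edges_init.copy()          # np.copy in the described code
    strong = (gray_edges_norm > threshold) & (h_edges_norm > threshold)
    fused[strong] = 255
    # Min-max normalize to 0-255 and cast to 8-bit, mirroring cv2.normalize
    # with alpha=0, beta=255, NORM_MINMAX, dtype=CV_8U.
    lo, hi = fused.min(), fused.max()
    if hi > lo:
        fused = (fused - lo) * 255.0 / (hi - lo)
    return fused.astype(np.uint8)
```

The boolean mask replaces the per-pixel double loop of the description with a single vectorized comparison, which is the usual NumPy idiom for this operation.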

Through the above steps, the adaptive fusion method of the present invention can selectively emphasize grayscale edges or hue edges according to the brightness characteristics of different regions, and thus extract the edge information of the image more accurately. Experimental results show that the hue and grayscale edge detection algorithm based on the adaptive fusion method proposed in this study achieves excellent edge detection results in regions of different brightness. Compared with traditional methods, the algorithm not only improves the accuracy of edge detection but also enhances robustness, enabling it to work stably in various complex scenes.

Specifically, in a second aspect, the present invention further provides a hue and grayscale edge detection system based on the adaptive fusion method. The system includes an image conversion module, a hue extraction module, a preprocessing module, a grayscale edge extraction module, a hue edge extraction module, a block fusion module and a pixel correction module; the modules cooperate according to the method steps described above to achieve edge detection of an image.

The block fusion module dynamically calculates the fusion weights of the H-channel edge image and the grayscale edge image according to the brightness mean of the V channel and performs weighted fusion of the blocks; the pixel correction module performs pixel-level correction on the preliminarily fused edge image.

The system used by the hue and grayscale edge detection method based on the adaptive fusion method proposed by the present invention includes an image conversion module, a hue extraction module, a preprocessing module, a grayscale edge extraction module, a hue edge extraction module, a block fusion module and a pixel correction module. The connection relationship of the system modules is shown in FIG. 2.

The specific implementation logic of the present invention is shown in FIG. 3. FIG. 3(a) describes the Sobel edge detection algorithm used in the present invention: the original image is convolved with convolution kernels for the x-axis and y-axis directions to obtain the x-direction and y-direction edges respectively, and the two edge maps are fused to obtain the full edges of the image. FIG. 3(b) describes the fusion procedure of the present invention: the original image is first converted into a hue-domain image and a grayscale-domain image, the Sobel edge detection algorithm is applied to both to obtain the hue-domain edges and the grayscale-domain edges, and the adaptive fusion algorithm proposed in the present invention, which computes weights from block brightness, then fuses them to obtain the complete edge image.
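The Sobel stage of FIG. 3(a) can be illustrated with a minimal NumPy sketch; the hand-rolled convolve2d helper (valid-mode, no padding) is for illustration only and stands in for the cv2.Sobel call described elsewhere in this document:

```python
import numpy as np

# Standard 3x3 Sobel kernels: A_x measures x-direction edges,
# A_y measures y-direction edges.
A_x = np.array([[-1, 0, 1],
                [-2, 0, 2],
                [-1, 0, 1]], dtype=np.float32)
A_y = np.array([[-1, -2, -1],
                [ 0,  0,  0],
                [ 1,  2,  1]], dtype=np.float32)

def convolve2d(img, kernel):
    """Minimal valid-mode 2-D convolution (no padding), for illustration."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1),
                   dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * flipped)
    return out

def sobel_edges(img):
    # G_x and G_y: edges along the two axes; G fuses them by gradient magnitude.
    g_x = convolve2d(img, A_x)
    g_y = convolve2d(img, A_y)
    return np.sqrt(g_x ** 2 + g_y ** 2)
```

Applied to a vertical step edge, g_y is zero and the magnitude reduces to |g_x|, which is how the two directional maps combine into the full edge image.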

The steps of the adaptive fusion algorithm proposed in the present invention, which computes the weights from the block brightness V, are shown in FIG. 4, together with the calculation of the weight k.

The processing effect of the algorithm proposed in the present invention is shown in FIG. 5, and its effect on details is shown in FIG. 6. Comparison with traditional edge detection shows that the present algorithm outperforms traditional edge detection algorithms in processing effect.

Regarding the choice of block size, the present invention compares the image processing times corresponding to different block sizes over a large number of experimental measurements and thereby obtains a suitable block size. The experimental results are recorded in FIG. 7.
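A timing harness of the kind described here might look as follows; the block sizes, image size and placeholder weight formula are assumptions for illustration, not the values tested in FIG. 7:

```python
import time
import numpy as np

def time_block_fusion(shape=(480, 640), block_sizes=(8, 16, 32, 64), repeats=3):
    """Measure average fusion time per block size (illustrative harness)."""
    rng = np.random.default_rng(0)
    gray = rng.random(shape).astype(np.float32)
    hue = rng.random(shape).astype(np.float32)
    v = rng.random(shape).astype(np.float32)
    timings = {}
    for b in block_sizes:
        start = time.perf_counter()
        for _ in range(repeats):
            fused = np.zeros(shape, dtype=np.float32)
            for r in range(0, shape[0], b):
                for c in range(0, shape[1], b):
                    v_mean = v[r:r + b, c:c + b].mean()
                    k = 1.0 - v_mean  # placeholder gray weight, not the patented formula
                    fused[r:r + b, c:c + b] = (k * gray[r:r + b, c:c + b]
                                               + (1.0 - k) * hue[r:r + b, c:c + b])
        timings[b] = (time.perf_counter() - start) / repeats
    return timings
```

Smaller blocks mean more loop iterations and mean computations, so such a sweep exposes the trade-off between per-block adaptivity and total processing time.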

Those skilled in the art will readily conceive of other embodiments of the present invention after considering the specification and practicing the schemes disclosed herein. The present invention is intended to cover any variations, uses or adaptations that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the present invention indicated by the claims.

Claims (9)

1. The hue and gray level edge detection method based on the self-adaptive fusion method is characterized by comprising the following steps of:
Step one, converting an input color image into a gray image;
Step two, converting the color image from RGB space to HSV space, and extracting a tone channel;
Step three, carrying out median filtering and Gaussian blur processing on the gray level image and the tone image;
Step four, carrying out convolution operation on the gray image and the tone channel by using a Sobel operator, and respectively extracting gray edge information and tone edge information;
Step five, creating a zero matrix with the same shape and data type as the gray edge image, for storing a final fusion result;
Step six, dynamically calculating the fusion weight of the H channel edge image and the gray level edge image according to the brightness average value of each block in the V channel image by traversing each block of the image, and carrying out weighted fusion on the H channel edge image and the gray level edge image of each block to generate a preliminary fusion edge image;
Step seven, setting a threshold value, carrying out pixel-level correction on the preliminarily fused edge image, and setting the intensity of a position to the maximum value when the intensities of the gray-scale edge image and the tone edge image at that position both exceed the threshold value;
Step eight, carrying out normalization processing on the corrected edge image to obtain a final fused edge image.
2. The method for detecting hue and gray level edges based on the adaptive fusion method according to claim 1, wherein in the second step, the color image is converted from RGB space to HSV space and the hue channel is extracted; in this process, the conversion is performed by using the cv2.cvtColor function, the hue dimension is extracted via image[:, :, 0] indexing, and the hue array is normalized and mapped to a fixed numerical range:
wherein H and V are the values of the hue channel and the brightness channel of the image, and R, G, B are the values of the image in the red, green and blue color channels respectively;
Converting the color image into a tone array and extracting the tone channel; the extracted image pixel information is converted into tone-domain information, and tone differences are extracted so as to recover image information whose grayscale differences are too small due to extreme illumination.
3. The method for detecting hue and gray-scale edges based on the adaptive fusion method according to claim 1, wherein in the third step, median filtering and Gaussian blur processing are performed on the gray-scale image and the hue image to eliminate noise interference; the image is median-filtered by using the cv2.medianBlur function, which takes the median of the target pixel and its surrounding values as the new value assigned to the original image, and a Gaussian blur is then applied by smoothing the image with the cv2.GaussianBlur function, so that the image data tends toward its neighboring data.
4. The method for detecting the tone and gray scale edges based on the adaptive fusion method according to claim 1, wherein in the fourth step, the input image is converted from BGR space to gray scale space, a Sobel operator is applied to carry out a convolution operation on the gray scale image, gradient images in the x direction and the y direction are obtained separately, and a gray-scale-domain Sobel edge detection result is obtained by calculating the magnitude and direction of the gradient;
The convolution operation is performed on the gray image by using the Sobel operator, the gradient magnitude and gradient direction of the image are calculated, and gray edge information is extracted; this is implemented with the cv2.Sobel function applied in the x and y directions, and the gray edge information is calculated as follows:
wherein A_x and A_y respectively denote the convolution kernels measuring edges in the x-axis and y-axis directions, G_x and G_y denote the x-direction and y-direction edges obtained by convolving the image, and G denotes the complete image edge information obtained by fusing the edges in the two directions.
5. The method for detecting hue and gray level edges based on the adaptive fusion method according to claim 1, wherein in the fifth step, a zero matrix fused_edges_init with the same shape and data type as gray_edges_norm is created to store the final fusion result.
6. The method for detecting hue and gray-scale edges based on the adaptive fusion method according to claim 1, wherein in the sixth step, a zero matrix fused_edges_init is first created, and each block of the image is traversed through two nested for loops, where block_height and block_width define the height and width of each block respectively; for each block, its ending row end_row and ending column end_col are calculated, the data of the current block is extracted from the normalized V-channel image v_channel_norm and its mean value v_mean is calculated, the weight weight_h of the H-channel edge image h_edges_norm in the block is calculated through a weight_function according to the value of v_mean, and the weight of the gray-scale edge image is then obtained by subtracting weight_h from 1, calculated as follows:
wherein V is the brightness average value of the V channel, and k is the fusion ratio of the two edge images;
Data of the current block are extracted from the normalized gray-scale edge image gray_edges_norm and the H-channel edge image h_edges_norm to obtain gray_edges_block and h_edges_block respectively, the two extracted block edge images are weightedly fused using the weights obtained by the previous calculation, and the result is stored at the current block position in fused_edges_init; the weighted fusion is calculated by the following formula:
fused_block = k × gray_block + (1 - k) × hue_block.
7. The method for detecting tone and gray scale edges based on the adaptive fusion method according to claim 1, wherein in the seventh step, the pixel-level correction is performed by comparing the intensity of each pixel in the gray scale edge image and the tone edge image, and when both intensities exceed a preset threshold value, the intensity of that pixel in the fused image is set to the maximum value.
8. The method for detecting tone and gray scale edges based on the adaptive fusion method according to claim 1, wherein in the eighth step, the normalization process normalizes the corrected edge image to a specified value range using OpenCV's cv2.normalize function, and specifies the data type of the output image.
9. The hue and gray edge detection system based on the adaptive fusion method according to claim 1, wherein the system comprises an image conversion module, a hue extraction module, a preprocessing module, a gray edge extraction module, a hue edge extraction module, a block fusion module and a pixel correction module; the hue extraction module is connected with the image conversion module, the preprocessing module is connected with the hue extraction module, the gray edge extraction module and the hue edge extraction module are connected with the preprocessing module in parallel and are both connected with the block fusion module, and the pixel correction module is connected with the block fusion module.
CN202410783919.6A 2024-06-18 2024-06-18 Hue and grayscale edge detection method and system based on adaptive fusion method Pending CN118799348A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410783919.6A CN118799348A (en) 2024-06-18 2024-06-18 Hue and grayscale edge detection method and system based on adaptive fusion method


Publications (1)

Publication Number Publication Date
CN118799348A true CN118799348A (en) 2024-10-18

Family

ID=93033224

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410783919.6A Pending CN118799348A (en) 2024-06-18 2024-06-18 Hue and grayscale edge detection method and system based on adaptive fusion method


Similar Documents

Publication Publication Date Title
JP4160258B2 (en) A new perceptual threshold determination for gradient-based local contour detection
US7321699B2 (en) Signal intensity range transformation apparatus and method
CN107945111B (en) An Image Mosaic Method Based on SURF Feature Extraction and CS-LBP Descriptor
CN111080661A (en) Image-based line detection method and device and electronic equipment
CN108389215B (en) Edge detection method and device, computer storage medium and terminal
CN111260543A (en) An underwater image stitching method based on multi-scale image fusion and SIFT features
CN115100226B (en) A Contour Extraction Method Based on Monocular Digital Image
CN109978878A (en) Color image line segment detecting method and its system based on LSD
CN110263778A (en) A kind of meter register method and device based on image recognition
US11625886B2 (en) Storage medium storing program, training method of machine learning model, and image generating apparatus
CN108960259A (en) A kind of license plate preprocess method based on HSV
CN116681606A (en) A method, system, device and medium for enhancing underwater uneven illumination images
CN111767752B (en) Two-dimensional code identification method and device
CN111553927B (en) Checkerboard corner detection method, detection system, computer device and storage medium
CN112907460A (en) Remote sensing image enhancement method
JP3659426B2 (en) Edge detection method and edge detection apparatus
CN106815851A (en) A kind of grid circle oil level indicator automatic reading method of view-based access control model measurement
CN111429383B (en) Image noise reduction method and device and computer readable storage medium
CN113610091A (en) Intelligent identification method and device for air switch state and storage medium
CN118799348A (en) Hue and grayscale edge detection method and system based on adaptive fusion method
CN117994156A (en) Presswork pattern matching process regulation and control method based on computer vision
CN117876233A (en) Mapping image enhancement method based on unmanned aerial vehicle remote sensing technology
CN110633705A (en) Low-illumination imaging license plate recognition method and device
CN114693543A (en) Image noise reduction method and device, image processing chip and image acquisition equipment
CN118505825B (en) Cashmere product color measurement method and device based on image recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination