CN102156991A - Quaternion based object optical flow tracking method - Google Patents
- Publication number: CN102156991A (application CN 201110089324; granted as CN102156991B)
- Authority: CN (China)
- Prior art keywords: quaternion, optical flow, color, corner, corner points
- Prior art date: 2011-04-11
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Image Analysis (AREA)
Abstract
Description
Technical Field
The present invention relates to a method in the technical field of video image processing, and in particular to a quaternion-based object optical flow tracking method.
Background Art
Object optical flow tracking first detects, in the first frame of an image sequence, feature points on the tracked object that can be reliably tracked. An optical flow estimation algorithm then computes the optical flow of these feature points between each pair of adjacent frames, yielding the positions of the feature points in the next frame; repeating this over the sequence gives the positions of all feature points on the tracked object in every frame, and hence the position of the tracked object. Optical flow estimation algorithms have been used in image processing and computer vision for many years. The Lucas-Kanade optical flow estimation algorithm is efficient and can estimate the optical flow of sparse feature points, so it is well suited to real-time object tracking. The original algorithm, however, operates only on gray-value images, whereas color treated as a single unit carries more information. Moreover, some color features are very stable and well suited to tracking when used for object tracking, but they lose their saliency when converted to gray values, and the Lucas-Kanade algorithm then performs poorly on them. In other cases, such as when the two frames used for flow estimation differ in illumination, color information is also more stable than grayscale information and helps produce more accurate flow estimates. Using color information in optical flow estimation is therefore very important.
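For reference, the conventional grayscale pipeline described above — detect trackable features once, then track them with pyramidal Lucas-Kanade between adjacent frames — can be sketched with OpenCV as follows; the file name and detector parameters are illustrative assumptions, not values from this patent.

```python
import cv2

cap = cv2.VideoCapture("sequence.avi")  # hypothetical input video
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect reliably trackable feature points in the first frame.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Sparse pyramidal Lucas-Kanade flow between adjacent frames.
    nxt, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)  # keep tracked points
    prev_gray = gray
```

This grayscale baseline is exactly what the quaternion method below replaces in its feature-detection and flow-estimation stages.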
A search of the prior art literature found the following. C. Lei and Y. Yang, in "Optical Flow Estimation on Coarse-to-Fine Region-Trees using Discrete Optimization" (Proceedings of the 12th International Conference on Computer Vision, pp. 1562-1569), proposed using color segmentation results to improve optical flow estimates; however, that method is based on global energy minimization and is therefore unsuitable for tracking sparse feature points. G. Demarcq et al., in "The Color Monogenic Signal: A new framework for color image processing. Application to color optical flow" (Proceedings of the 2009 International Conference on Image Processing, pp. 481-484), introduced the concept of local color phase based on the color monogenic signal of a color image and used it to estimate optical flow. P. Golland and A. M. Bruckstein, in "Motion from Color" (Computer Vision and Image Understanding, vol. 68, no. 3, pp. 346-362, 1997), and Kelson Aires et al., in "Optical flow using color information: preliminary results" (Proceedings of the 2008 ACM Symposium on Applied Computing, pp. 1607-1611), used a direct three-channel approach that applies the Lucas-Kanade optical flow constraint to the R, G, and B channels separately and merges these constraints to estimate the flow. Other work converts color from RGB to another color space and estimates the flow there: X. Xiang et al., in "A method of optical flow computation based on LUV color space" (Proceedings of the 2009 International Conference on Test and Measurement, pp. 378-381), convert color from RGB to LUV space, while the aforementioned paper by P. Golland and A. M. Bruckstein converts color from RGB to HSV space. Whether in RGB or another color space, this prior work still treats color as separate channel signals rather than as a single holistic signal. This motivates the search for a new color optical flow estimation algorithm that uses color information as a whole and improves the accuracy of flow estimation. Furthermore, successful object optical flow tracking requires detecting and tracking reliable feature points. Intuitively, color features are reliable for tracking and remain more stable than grayscale features under illumination changes. This motivates the search for a color feature suited to object optical flow tracking that, combined with a color optical flow estimation algorithm, improves tracking accuracy.
Summary of the Invention
In view of the above deficiencies of the prior art, the present invention provides a quaternion-based object optical flow tracking method that uses the quaternion representation of color to process color as a single holistic signal when estimating optical flow. In this way, more accurate flow estimates are obtained at pixel positions with spatial color variation, so feature points on the object can be tracked more robustly and the tracking error is reduced. The invention also adopts quaternion color corner points as reliable feature points for tracking: the quaternion color corners and the gray-value corners together form a good feature point set for tracking, which is combined with the quaternion optical flow estimation algorithm for object optical flow tracking.
The present invention is realized through the following technical solution, comprising the following steps:
Step 1: In the first frame I of the image sequence, set the region of the target to be tracked, detect Harris corner points within this region, and retain the n_h Harris corners whose cornerness measure exceeds a threshold γ. The specific sub-steps are:
1.1) Compute the gradients I_x and I_y of the first frame I of the image sequence in the spatial x and y directions, as the magnitude variation in the local neighborhood of each pixel;
1.2) Compute the cornerness measure from the magnitude correlation matrix M, and keep the Harris corners whose cornerness measure exceeds the threshold γ.
The magnitude correlation matrix is M = g(σ) * [I_x^2, I_x·I_y; I_x·I_y, I_y^2], where g(σ) is a Gaussian smoothing filter, σ is the filter size parameter, and * denotes convolution.
The cornerness measure is Cornerness = det(M) − k·trace^2(M), where det and trace denote the determinant and trace of the magnitude correlation matrix M, and k is a tuning parameter.
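A minimal NumPy/SciPy sketch of steps 1.1) and 1.2), using the definitions of M and Cornerness above; the default parameter values are taken from the embodiment described later (σ = 3, k = 0.04, γ = 2000):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_corners(I, sigma=3.0, k=0.04, gamma=2000.0):
    Iy, Ix = np.gradient(I.astype(np.float64))   # step 1.1: spatial gradients
    # Step 1.2: entries of M = g(sigma) * [Ix^2, IxIy; IxIy, Iy^2]
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    cornerness = det - k * trace ** 2
    ys, xs = np.nonzero(cornerness > gamma)      # keep corners above gamma
    return np.column_stack([xs, ys]), cornerness
```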
Step 2: Detect quaternion color corner points within the region of the target to be tracked, and retain the n_q quaternion color corners whose color cornerness measure exceeds a threshold γ_q. The specific sub-steps are:
2.1) Represent the first frame I of the image sequence as a pure quaternion matrix I_q, apply the following quaternion filtering to I_q, and take the saturation of each filter response as the degree of color change C_x in the x direction and C_y in the y direction, respectively:
where R = S·e^(μπ/4) = S{cos(π/4) + μ·sin(π/4)} and μ is the unit pure quaternion (i + j + k)/√3.
2.2) Compute the color cornerness measure from the color correlation matrix M_q, and keep the quaternion color corners whose color cornerness measure exceeds the threshold γ_q.
The color correlation matrix is M_q = g(σ) * [C_x^2, C_x·C_y; C_x·C_y, C_y^2], where g(σ) is a Gaussian smoothing filter, σ is the filter size parameter, and * denotes convolution.
The color cornerness measure is Cornerness_q = det(M_q)/(trace(M_q) + δ), where det and trace denote the determinant and trace of the color correlation matrix M_q, and δ = 2.2204e-16.
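A matching sketch of step 2.2). The color-change maps C_x and C_y are assumed to have been produced by the quaternion filtering of step 2.1); only the correlation matrix and the cornerness measure are shown here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def quaternion_color_corners(Cx, Cy, sigma=3.0, gamma_q=4.5e-4):
    # Mq = g(sigma) * [Cx^2, CxCy; CxCy, Cy^2]
    Sxx = gaussian_filter(Cx * Cx, sigma)
    Sxy = gaussian_filter(Cx * Cy, sigma)
    Syy = gaussian_filter(Cy * Cy, sigma)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    cornerness_q = det / (trace + 2.2204e-16)    # delta avoids division by zero
    ys, xs = np.nonzero(cornerness_q > gamma_q)
    return np.column_stack([xs, ys]), cornerness_q
```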
Step 3: Combine the n_h Harris corners and the n_q quaternion color corners into the feature set for optical flow tracking. For each feature point in the set, estimate the optical flow between adjacent frames of the image sequence with the quaternion-based optical flow estimation algorithm, and use the resulting flow value to update the feature point's position in the following frame, thereby realizing tracking. The specific sub-steps are:
3.1) At time t, estimate the optical flow of each feature point in the feature set between the adjacent frames I_t and I_(t+1) using the steps below, and add the flow to the feature point's position at time t to obtain its position at time t+1; then proceed to the flow estimation at time t+1. Initially t = 1.
3.2) Downsample I_t and I_(t+1) to build p-level image pyramids I_(t,l) and I_(t+1,l), where l ∈ [1, p] is the pyramid level. First perform the optical flow estimation of steps 3.3) and 3.4) at the top level of the pyramid; the resulting flow is then upscaled to the next level down and used as the initial flow at that level, where steps 3.3) and 3.4) are performed again. This repeats down to the bottom level of the pyramid, yielding the final flow estimate. To further improve accuracy, the flow estimation of steps 3.3) and 3.4) is cycled q times at each pyramid level, as in the sketch below.
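The coarse-to-fine scheme of step 3.2) can be sketched as follows, where `refine_flow` is a placeholder for one pass of steps 3.3)-3.4) at a single pyramid level (a sketch of that pass follows step 3.4)); p and q default to the embodiment's values:

```python
import cv2
import numpy as np

def pyramid_flow(I_t, I_t1, point, refine_flow, p=3, q=3):
    pyr_t, pyr_t1 = [I_t], [I_t1]
    for _ in range(p - 1):                    # build p-level pyramids
        pyr_t.append(cv2.pyrDown(pyr_t[-1]))
        pyr_t1.append(cv2.pyrDown(pyr_t1[-1]))
    flow = np.zeros(2)
    for l in reversed(range(p)):              # top level first
        pt = np.asarray(point, float) / (2 ** l)
        for _ in range(q):                    # q refinement cycles per level
            flow = refine_flow(pyr_t[l], pyr_t1[l], pt, flow)
        if l > 0:
            flow = flow * 2.0                 # upscale flow to the next level
    return flow
```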
3.3) At level l of the image pyramid, represent I_(t,l) and I_(t+1,l) as pure quaternion images. For any Harris corner or quaternion color corner in the feature set with space-time coordinates (x, y, t), take the w×w local neighborhood around it in the spatial directions. Let Q^(c) and Q^(i) be the quaternion color values of the pixel at the neighborhood center (x, y) and of another pixel i in the neighborhood, and compute the quaternion inner product of Q^(c) and Q^(i) to measure how similar in color pixel i is to the center pixel; if the color similarity is below a threshold α, pixel i is excluded from the following steps.
The color similarity is L_i = <Q^(i), Q^(c)> / (|Q^(i)|·|Q^(c)|), where <·,·> denotes the quaternion inner product and |·| denotes the quaternion modulus.
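Since pure quaternion color values can be stored as (0, R, G, B) 4-vectors, the inner product <·,·> reduces to the ordinary dot product of those vectors, and the exclusion test of step 3.3) becomes a few lines of NumPy; the array layout is an assumption of this sketch:

```python
import numpy as np

def color_similarity(Q_i, Q_c):
    """L_i = <Q_i, Q_c> / (|Q_i| * |Q_c|) for pure quaternion colors."""
    return np.dot(Q_i, Q_c) / (np.linalg.norm(Q_i) * np.linalg.norm(Q_c))

def neighborhood_mask(patch, alpha=0.97):
    """patch: (w, w, 4) pure quaternion colors; returns pixels to keep."""
    w = patch.shape[0]
    Q_c = patch[w // 2, w // 2]                   # center pixel color
    L = np.einsum('ijk,k->ij', patch, Q_c)        # dot product per pixel
    L /= (np.linalg.norm(patch, axis=2) * np.linalg.norm(Q_c) + 1e-12)
    return L >= alpha                             # exclude pixels with L < alpha
```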
3.4) Compute the quaternion gradients Q_x and Q_y of the quaternion image in the spatial x and y directions. For any Harris corner or quaternion color corner in the feature set with coordinates (x, y, t), take its w×w local spatial neighborhood, take the corresponding neighborhood in the next frame pointed to by the initial optical flow, and compute the quaternion gradient Q_t in the temporal direction over this neighborhood. After the exclusion step 3.3), the pixels remaining in the neighborhood yield the quaternion optical flow equations Q_x^(i)·u + Q_y^(i)·v = −Q_t^(i), where (u, v) is the optical flow to be estimated and the superscript i indexes the pixels remaining in the neighborhood. In quaternion matrix form this is A_q·v_q = b_q, where A_q is an n×2 pure quaternion matrix, b_q is an n×1 pure quaternion vector, and v_q is a 2×1 quaternion vector; the modulus of an n×1 quaternion vector q is defined as ||q|| = sqrt(Σ_n |q_n|^2), where |q_n| is the modulus of each quaternion element. Compute v_q = (A_q^H·A_q)^(−1)·A_q^H·b_q; the sought optical flow is (S(u), S(v)), where S(·) denotes the scalar part of a quaternion.
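The least-squares solve v_q = (A_q^H·A_q)^(−1)·A_q^H·b_q can be carried out with real linear algebra by replacing every quaternion entry with its 4×4 left-multiplication matrix, which turns the n×2 quaternion problem into an ordinary real one with the same minimizer. The sketch below assumes quaternions stored as (w, x, y, z) 4-vectors and the Lucas-Kanade sign convention b_q = −Q_t:

```python
import numpy as np

def left_mul(q):
    """4x4 real matrix L(q) such that L(q) @ p equals the quaternion product q*p."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def quaternion_flow(Qx, Qy, Qt):
    """Qx, Qy, Qt: (n, 4) quaternion gradients at the retained pixels."""
    n = Qx.shape[0]
    A = np.zeros((4 * n, 8))
    b = np.zeros(4 * n)
    for i in range(n):
        A[4*i:4*i+4, 0:4] = left_mul(Qx[i])   # column for the u component
        A[4*i:4*i+4, 4:8] = left_mul(Qy[i])   # column for the v component
        b[4*i:4*i+4] = -Qt[i]
    vq, *_ = np.linalg.lstsq(A, b, rcond=None)
    # The flow is the scalar part of each quaternion component of vq.
    return vq[0], vq[4]
```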
The principle of the present invention is as follows: color is represented with quaternions and processed as a single holistic signal; the color corners detected with quaternions, together with the grayscale corners, form the feature point set for object tracking, reducing the number of feature points required and increasing the tracking speed; and the quaternion-based optical flow estimation algorithm performs the optical flow tracking of the object. Quaternion filtering of the color image yields the degree of color change, and the saturation of the filtering result is computed to detect color corners. The quaternion-based optical flow estimation algorithm estimates the flow more accurately in regions with color variation and excludes pixels with large color differences from the local neighborhood, which better satisfies the optical flow constancy assumption; during object optical flow tracking, more feature points stay at the correct positions, achieving robust tracking.
Compared with the prior art, the present invention detects color corners with quaternions and combines them with grayscale corners as the feature point set for object optical flow tracking, and its quaternion-based optical flow estimation algorithm processes color as a holistic signal rather than as three separate channels. In a public optical flow estimation benchmark, the flow estimation error is 11.9% lower than that of the Lucas-Kanade algorithm, and in object optical flow tracking experiments the method effectively reduces feature point position errors.
The quaternion optical flow estimation algorithm was quantitatively evaluated on a public optical flow test set: on the Middlebury optical flow benchmark, its estimation error is 11.9% lower than that of the Lucas-Kanade algorithm. Object tracking combined with quaternion color corners also improves tracking accuracy, demonstrating that the present invention achieves real-time, robust tracking.
Brief Description of the Drawings
Figure 1 is the flow chart of object optical flow tracking with the method of the present invention.
Figure 2 shows the distinction between quaternion color corners and grayscale corners in the embodiment.
Figure 3 shows the object optical flow tracking results of the embodiment.
Detailed Description of the Embodiments
An embodiment of the present invention is described in detail below. The embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation mode and a specific operating procedure are given, but the protection scope of the present invention is not limited to the following embodiment.
Embodiment:
Step 1: In the first frame I of the image sequence, set the region of the target to be tracked, detect Harris corners within this region, and retain the n_h Harris corners whose cornerness measure exceeds the threshold γ. In this embodiment, γ = 2000. The specific sub-steps are:
1.1) Compute the gradients I_x and I_y of the first frame I of the image sequence in the spatial x and y directions, as the magnitude variation in the local neighborhood of each pixel;
1.2) Compute the cornerness measure from the magnitude correlation matrix M, and keep the Harris corners whose cornerness measure exceeds the threshold γ.
The magnitude correlation matrix is M = g(σ) * [I_x^2, I_x·I_y; I_x·I_y, I_y^2], where g(σ) is a Gaussian smoothing filter, σ is the filter size parameter, and * denotes convolution. In this embodiment, σ = 3.
The cornerness measure is Cornerness = det(M) − k·trace^2(M), where det and trace denote the determinant and trace of the magnitude correlation matrix M, and k is a tuning parameter. In this embodiment, k = 0.04.
Step 2: Detect quaternion color corner points within the region of the target to be tracked, and retain the n_q quaternion color corners whose color cornerness measure exceeds the threshold γ_q. In this embodiment, γ_q = 4.5e-4. The specific sub-steps are:
2.1) Represent the first frame I of the image sequence as a pure quaternion matrix I_q, apply the following quaternion filtering to I_q, and take the saturation of each filter response as the degree of color change C_x in the x direction and C_y in the y direction, respectively:
where R = S·e^(μπ/4) = S{cos(π/4) + μ·sin(π/4)} and μ is the unit pure quaternion (i + j + k)/√3.
2.2) Compute the color cornerness measure from the color correlation matrix M_q, and keep the quaternion color corners whose color cornerness measure exceeds the threshold γ_q.
The color correlation matrix is M_q = g(σ) * [C_x^2, C_x·C_y; C_x·C_y, C_y^2], where g(σ) is a Gaussian smoothing filter, σ is the filter size parameter, and * denotes convolution. In this embodiment, σ = 3.
The color cornerness measure is Cornerness_q = det(M_q)/(trace(M_q) + δ), where det and trace denote the determinant and trace of the color correlation matrix M_q, and δ = 2.2204e-16.
Step 3: Combine the n_h Harris corners and the n_q quaternion color corners into the feature set for optical flow tracking. For each feature point in the set, estimate the optical flow between adjacent frames of the image sequence with the quaternion-based optical flow estimation algorithm, and use the resulting flow value to update the feature point's position in the following frame, thereby realizing tracking. The specific sub-steps are:
3.1) At time t, estimate the optical flow of each feature point in the feature set between the adjacent frames I_t and I_(t+1) using the steps below, and add the flow to the feature point's position at time t to obtain its position at time t+1; then proceed to the flow estimation at time t+1. Initially t = 1.
3.2) Downsample I_t and I_(t+1) to build p-level image pyramids I_(t,l) and I_(t+1,l), where l ∈ [1, p] is the pyramid level. First perform the optical flow estimation of steps 3.3) and 3.4) at the top level of the pyramid; the resulting flow is then upscaled to the next level down and used as the initial flow at that level, where steps 3.3) and 3.4) are performed again. This repeats down to the bottom level of the pyramid, yielding the final flow estimate. To further improve accuracy, the flow estimation of steps 3.3) and 3.4) is cycled q times at each pyramid level. In this embodiment, p = 3 and q = 3.
3.3) At level l of the image pyramid, represent I_(t,l) and I_(t+1,l) as pure quaternion images. For any Harris corner or quaternion color corner in the feature set with space-time coordinates (x, y, t), take the w×w local neighborhood around it in the spatial directions. Let Q^(c) and Q^(i) be the quaternion color values of the pixel at the neighborhood center (x, y) and of another pixel i in the neighborhood, and compute the quaternion inner product of Q^(c) and Q^(i) to measure how similar in color pixel i is to the center pixel; if the color similarity is below the threshold α, pixel i is excluded from the following steps. In this embodiment, w = 9 and α = 0.97.
The color similarity is L_i = <Q^(i), Q^(c)> / (|Q^(i)|·|Q^(c)|), where <·,·> denotes the quaternion inner product and |·| denotes the quaternion modulus.
3.4) Compute the quaternion gradients Q_x and Q_y of the quaternion image in the spatial x and y directions. For any Harris corner or quaternion color corner in the feature set with coordinates (x, y, t), take its w×w local spatial neighborhood, take the corresponding neighborhood in the next frame pointed to by the initial optical flow, and compute the quaternion gradient Q_t in the temporal direction over this neighborhood. After the exclusion step 3.3), the pixels remaining in the neighborhood yield the quaternion optical flow equations Q_x^(i)·u + Q_y^(i)·v = −Q_t^(i), where (u, v) is the optical flow to be estimated and the superscript i indexes the pixels remaining in the neighborhood. In quaternion matrix form this is A_q·v_q = b_q, where A_q is an n×2 pure quaternion matrix, b_q is an n×1 pure quaternion vector, and v_q is a 2×1 quaternion vector; the modulus of an n×1 quaternion vector q is defined as ||q|| = sqrt(Σ_n |q_n|^2), where |q_n| is the modulus of each quaternion element. Compute v_q = (A_q^H·A_q)^(−1)·A_q^H·b_q; the sought optical flow is (S(u), S(v)), where S(·) denotes the scalar part of a quaternion.
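Pulling the earlier sketches together with this embodiment's parameter values gives, for the feature-detection stage (the input arrays here are placeholders, and `harris_corners` and `quaternion_color_corners` are the hypothetical helpers sketched above):

```python
import numpy as np

I = np.random.rand(240, 320) * 255   # placeholder grayscale first frame
Cx = np.random.rand(240, 320)        # placeholder color-change maps from step 2.1)
Cy = np.random.rand(240, 320)

corners, _ = harris_corners(I, sigma=3.0, k=0.04, gamma=2000.0)
ccorners, _ = quaternion_color_corners(Cx, Cy, sigma=3.0, gamma_q=4.5e-4)
features = np.vstack([corners, ccorners])  # the n_h + n_q tracked feature set
```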
Implementation Effects
Following the above steps, object (player) optical flow tracking was performed on a rugby match video sequence. The sequence contains players from a blue-and-yellow team and a white team, corresponding respectively to objects with salient color features and with salient grayscale features. As Figure 2 shows, the Harris corners are mostly scattered over the white players and the quaternion color corners are mostly scattered over the blue-and-yellow players; together they form the tracking feature set. Figure 3 compares the results of Lucas-Kanade tracking (middle column of Figure 3) and quaternion-based optical flow tracking (right column of Figure 3): with the quaternion-based tracking, the feature points stay at the correct positions (red boxes mark feature points at incorrect positions). Using the optical flow estimation algorithm of Step 3 above on the Middlebury optical flow benchmark, the flow estimation error, evaluated with the average angular error (AAE) and the average endpoint error (AEPE), was 11.9% lower. All experiments ran on a PC whose main parameters are: Core 2 Duo CPU E6600 @ 2.40 GHz, 2 GB of memory.
The experiments show that, compared with existing object optical flow tracking methods, the feature point set adopted in this embodiment covers stable features with both salient grayscale variation and salient color variation for the subsequent flow estimation, and the adopted flow estimation algorithm effectively reduces the flow estimation error, computes feature point positions more accurately during tracking, and improves the tracking accuracy.
Claims (7)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN 201110089324 (granted as CN102156991B) | 2011-04-11 | 2011-04-11 | Quaternion based object optical flow tracking method |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN102156991A | 2011-08-17 |
| CN102156991B | 2013-05-01 |
Family
ID=44438472

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date | Status |
|---|---|---|---|---|
| CN 201110089324 (granted as CN102156991B) | Quaternion based object optical flow tracking method | 2011-04-11 | 2011-04-11 | Expired - Fee Related |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | C14 | Grant of patent or utility model | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2013-05-01; termination date: 2017-04-11 |