CN103279952A - Target tracking method and device - Google Patents
Target tracking method and device
- Publication number
- CN103279952A CN103279952A CN2013101831741A CN201310183174A CN103279952A CN 103279952 A CN103279952 A CN 103279952A CN 2013101831741 A CN2013101831741 A CN 2013101831741A CN 201310183174 A CN201310183174 A CN 201310183174A CN 103279952 A CN103279952 A CN 103279952A
- Authority
- CN
- China
- Prior art keywords
- image
- matched
- feature
- reference image
- feature points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
An embodiment of the present invention discloses a target tracking method, which includes: extracting corner features of a reference image with a preset algorithm to obtain a first feature point set of the reference image; removing from the first feature point set the abnormal points that fall outside the range of the image to be matched, to obtain a second feature point set; and performing image matching between the second feature point set and the image to be matched, and completing target tracking of the image to be matched according to the matching result. An embodiment of the invention also discloses a target tracking device. With the present invention, the extracted corner features are highly distinctive and stable, which greatly improves the matching speed; at the same time, the feature point set of the reference image is optimized by removing out-of-range corner points, which overcomes the prior-art shortcoming that out-of-range feature points cannot be matched and improves the reliability of matching.
Description
Technical Field
The present invention relates to the field of image processing, and in particular to a target tracking method and device.
Background Art
Object detection technology, also commonly called image feature extraction, is a concept of computer vision and digital image processing. It refers to the process of extracting image information by using computer operations to decide whether each pixel of an image belongs to an image feature. Target tracking technology determines the position of a region of interest (Region Of Interest, ROI), also called a template (Template), in a video image or a continuous image sequence, and associates the template across every frame of the sequence.
The existing tracking technology can be described as follows:
(1) Determine a template in the collected image sequence, usually called the reference image; the reference image records the target that needs to be tracked;
(2) To improve computational efficiency, instead of transforming every pixel in the reference image, pixels are extracted uniformly at equal intervals from the reference image. This process can be called equidistant sampling, and it forms the original feature point set;
(3) Operate on the original feature point set and its neighborhood information to obtain a new feature point set, and determine the matching area in the image to be matched according to the new feature point set;
(4) Compute the grayscale information between the matching area and the reference template, for example with the SSD (Sum of Squared Differences) method; by iteratively minimizing the error, the matching area is made to match the reference image, that is, the matching area and the reference image contain the same image information, namely the target that needs to be tracked;
(5) Repeat steps (3)-(4) for the acquired image sequence; through template matching between image frames, continuous tracking of the target is finally achieved.
However, the prior art has the following disadvantages:
(1) When the feature point set of the reference image is obtained, the pixels within the reference image are sampled at equal intervals. The feature points obtained in this way are largely arbitrary, usually carry little image information, cannot represent the image features well, and have low reliability and stability, so the tracking algorithm is not robust.
(2) Once the reference image is obtained, it is used as a standard template and is never changed. In an actual tracking system, however, as the camera or other image acquisition device moves, part of the determined reference image may move out of the camera's acquisition range and cannot be imaged, so part of the reference image no longer appears in the images of the subsequent sequence. Because the existing scheme keeps using the initial reference image, the algorithm that computes the image grayscale information between the reference image and the matching area cannot converge and cannot produce a correct result, which leads to tracking failure.
Summary of the Invention
The technical problem to be solved by the embodiments of the present invention is to provide a target tracking method that can overcome the shortcomings of the prior art, namely the low stability of image features and the tracking failure caused by the reference image moving out of range.
To solve the above technical problem, a first aspect of the present invention provides a target tracking method, including:
extracting corner features of a reference image with a preset algorithm to obtain a first feature point set of the reference image, the reference image being used to track the target in the current image to be matched;
removing from the first feature point set the abnormal points that fall outside the range of the image to be matched, to obtain a second feature point set;
performing image matching between the second feature point set and the image to be matched, and completing target tracking of the image to be matched according to the matching result.
In a first possible implementation, the preset algorithm includes: the Harris corner detection algorithm, the FAST (Features from Accelerated Segment Test) corner detection algorithm, the KLT corner detection algorithm, or the SUSAN (Smallest Univalue Segment Assimilating Nucleus) corner detection algorithm.
With reference to the first aspect or the first possible implementation, in a second possible implementation, before the step of extracting corner features of the reference image with the preset algorithm to obtain the first feature point set of the reference image, the method further includes:
acquiring an image sequence collected by a camera, and taking the image sequence as the images to be matched;
and determining the reference image of the images to be matched from the first frame of the image sequence.
With reference to the second possible implementation of the first aspect, in a third possible implementation, the step of removing from the first feature point set the abnormal points that fall outside the range of the image to be matched to obtain a second feature point set includes:
obtaining the coordinates of each feature point in the first feature point set;
judging, according to the coordinates of each feature point in the feature point set, whether the point falls outside the range of the image to be matched, and if so, determining that the feature point is an abnormal point;
removing all abnormal points from the first feature point set to obtain the second feature point set.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation, the step of performing image matching between the second feature point set and the image to be matched and completing target tracking of the image to be matched according to the matching result includes:
selecting, in the image to be matched, a sub-image to be matched of the same size as the reference image;
extracting features of the sub-image to be matched with the preset algorithm to obtain a third feature point set;
calculating the distance between the second feature point set and the third feature point set;
judging whether the distance is smaller than a preset threshold; if so, determining that the reference image is successfully matched with the sub-image to be matched in the image to be matched; if not, re-executing the step of selecting, in the image to be matched, a sub-image to be matched of the same size as the reference image.
With reference to the fourth possible implementation of the first aspect, in a fifth possible implementation, after the step of performing image matching between the second feature point set and the image to be matched and completing target tracking of the image to be matched according to the matching result, the method further includes:
deleting the reference image and taking the sub-image to be matched that was successfully matched with the reference template as the new reference image.
Correspondingly, a second aspect of the present invention further provides a target tracking device, including:
a feature extraction module, configured to extract corner features of a reference image with a preset algorithm to obtain a first feature point set of the reference image, the reference image being used to track the target in the current image to be matched;
a feature optimization module, configured to remove from the first feature point set the abnormal points that fall outside the range of the image to be matched, to obtain a second feature point set;
an image matching module, configured to perform image matching between the second feature point set and the image to be matched, and complete target tracking of the image to be matched according to the matching result.
In a first possible implementation, the feature extraction module is configured to extract corner features of the reference image with a preset algorithm to obtain the first feature point set of the reference image, the preset algorithm including the Harris corner detection algorithm, the FAST corner detection algorithm, the KLT corner detection algorithm, or the SUSAN corner detection algorithm.
With reference to the second aspect or the first possible implementation, in a second possible implementation, the device further includes:
a template determination module, configured to acquire an image sequence collected by a camera, take the image sequence as the images to be matched, and determine the reference image of the images to be matched from the first frame of the image sequence.
With reference to the second possible implementation, in a third possible implementation, the feature optimization module includes:
an acquisition unit, configured to obtain the coordinates of each feature point in the first feature point set;
a judging unit, configured to judge, according to the coordinates of each feature point in the feature point set, whether the point falls outside the range of the image to be matched, and if so, determine that the feature point is an abnormal point;
an elimination unit, configured to remove all abnormal points from the first feature point set to obtain the second feature point set.
With reference to the third possible implementation, in a fourth possible implementation, the image matching module includes:
a selection unit, configured to select, in the image to be matched, a sub-image to be matched of the same size as the reference image;
an extraction unit, configured to extract features of the sub-image to be matched with the preset algorithm to obtain a third feature point set;
a calculation unit, configured to calculate the distance between the second feature point set and the third feature point set;
With reference to the fourth possible implementation, in a fifth possible implementation, the device further includes:
a template update module, configured to delete the reference image and take the sub-image to be matched that was successfully matched with the reference template as the new reference image.
Implementing the embodiments of the present invention has the following beneficial effects:
The feature point set of the reference image is obtained by extracting the corner features of the reference image; the abnormal points in the feature point set that fall outside the range of the image to be matched are removed; and the feature point set with the abnormal points removed is matched against the image to be matched, so as to achieve target tracking. Using corner features for matching reduces the amount of data involved in the computation while preserving the important graphical features of the image; corner features are highly distinctive and stable, which greatly improves the matching speed. At the same time, the method optimizes the feature point set of the reference image by removing out-of-range corner points, which overcomes the prior-art shortcoming that out-of-range feature points cannot be matched and improves the reliability of matching.
Brief Description of the Drawings
To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a target tracking method according to an embodiment of the present invention;
Fig. 2 is another schematic flowchart of a target tracking method according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a target tracking device according to an embodiment of the present invention;
Fig. 4 is another schematic structural diagram of a target tracking device according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the feature optimization module in Fig. 4;
Fig. 6 is a schematic structural diagram of the image matching module in Fig. 4;
Fig. 7 is yet another schematic structural diagram of a target tracking device according to an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, which is a schematic flowchart of a target tracking method according to an embodiment of the present invention, the method includes:
Step 101: extract corner features of a reference image with a preset algorithm to obtain a first feature point set of the reference image, the reference image being used to track the target in the current image to be matched.
Specifically, a corner has no strict mathematical definition; it is generally regarded as a point where the brightness of a two-dimensional image changes sharply, or a point of maximum curvature along an image edge. Among the various features of an image, corners are stable, rotation-invariant and effective features that can withstand grayscale inversion. The algorithm used by the target tracking device to extract corner features from the reference image may be the Harris corner detection algorithm, the FAST corner detection algorithm, the KLT corner detection algorithm or the SUSAN corner detection algorithm, or any other algorithm; the present invention is not limited in this respect. The target tracking device extracts the corner features of the reference image to obtain a first feature point set, and this first feature point set is a binary image. The reference image is an image that records the target to be tracked, and it is matched against the image to be matched in order to track the target.
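By way of illustration only (not part of the original disclosure), the corner extraction of step 101 could be sketched in Python with OpenCV; goodFeaturesToTrack implements the Shi-Tomasi/KLT criterion and can be switched to the Harris criterion, but the parameter values below are assumptions chosen for the example rather than values specified by the patent.

```python
import cv2
import numpy as np

def extract_corner_points(image, max_corners=200, quality=0.01, min_dist=5):
    """Extract corner features from an image and return an (N, 2) array of
    (x, y) corner coordinates -- the "first feature point set" of step 101."""
    gray = image if image.ndim == 2 else cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, max_corners, quality, min_dist,
                                      useHarrisDetector=True, k=0.04)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    return corners.reshape(-1, 2)
```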
Step 102: remove from the first feature point set the abnormal points that fall outside the range of the image to be matched, to obtain a second feature point set.
Specifically, when the shape of the target in the image to be matched is incomplete, for example when the target is partially occluded or lies partly outside the image to be matched, the first feature point set extracted from the reference image contains corner points that do not fall within the range of the image to be matched; these become abnormal points. If these abnormal points take part in the matching computation, the matching result cannot converge to the correct result, that is, the matching of the reference image and the image to be matched fails. To avoid this, the target tracking device obtains the coordinates of each corner point in the first feature point set, judges from the relationship between the corner coordinates and the coordinates of the image to be matched whether the corner point falls outside the range of the image to be matched, and if so treats the corner point as an abnormal point; all abnormal points are removed in this way to obtain the second feature point set.
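A minimal sketch of the out-of-range test described in step 102, assuming the feature points are stored as (x, y) coordinates already expressed in the coordinate frame of the image to be matched (how that mapping is obtained is an assumption of this example, not something the patent fixes):

```python
import numpy as np

def remove_out_of_range_points(points, width, height):
    """Keep only the feature points whose (x, y) coordinates fall inside the
    image to be matched; points outside are treated as abnormal points and
    removed, yielding the "second feature point set" of step 102."""
    points = np.asarray(points, dtype=np.float32).reshape(-1, 2)
    inside = ((points[:, 0] >= 0) & (points[:, 0] < width) &
              (points[:, 1] >= 0) & (points[:, 1] < height))
    return points[inside]
```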
Step 103: perform image matching between the second feature point set and the image to be matched, and complete target tracking of the image to be matched according to the matching result.
Specifically, a sub-image to be matched of the same size as the reference image is selected in the image to be matched, and the corner features of this sub-image are extracted with the same corner extraction algorithm as in step 101 to obtain a third feature point set. Matching based on corner features can be done in two ways. One is to describe the extracted second feature point set of the reference image and the third feature point set of the sub-image to be matched in a predetermined way, so that image matching becomes the matching of two vector sets. The other is to represent the extracted feature point sets as binary images, apply a distance transform to the binary images, and measure the similarity of the two feature point sets with a distance-based similarity. Other methods may of course be used; the present invention is not limited in this respect.
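As a hedged illustration of the second matching option (feature point sets represented as binary images followed by a distance transform), the sketch below rasterises a point set and computes its distance transform with OpenCV; the 0/255 encoding and the L2 metric are choices made for the example, not details prescribed by the patent.

```python
import cv2
import numpy as np

def points_to_distance_map(points, height, width):
    """Rasterise a feature point set into a binary image of size height*width
    and return its distance transform: each pixel holds the distance to the
    nearest feature point."""
    binary = np.full((height, width), 255, dtype=np.uint8)  # 255 = background
    for x, y in np.round(points).astype(int):
        if 0 <= x < width and 0 <= y < height:
            binary[y, x] = 0                                 # 0 = feature point
    # cv2.distanceTransform gives, for every non-zero pixel, the distance to
    # the nearest zero pixel (here: the nearest feature point).
    return cv2.distanceTransform(binary, cv2.DIST_L2, 3)
```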
In the embodiment of the present invention, the feature point set of the reference image is obtained by extracting the corner features of the reference image; the abnormal points in the feature point set that fall outside the range of the image to be matched are removed; and the feature point set with the abnormal points removed is matched against the image to be matched, so as to achieve target tracking. Using corner features for matching reduces the amount of data involved in the computation while preserving the important graphical features of the image; corner features are highly distinctive and stable, which greatly improves the matching speed. At the same time, the method optimizes the feature point set of the reference image by removing out-of-range corner points, which overcomes the prior-art shortcoming that out-of-range feature points cannot be matched and improves the reliability of matching.
Referring to Fig. 2, which is a schematic diagram of another embodiment of a target tracking method according to an embodiment of the present invention, the method includes: Step 201: acquire an image sequence collected by a camera, take the image sequence as the images to be matched, and determine the reference image of the images to be matched from the first frame of the image sequence.
Specifically, the format of the image sequence collected by the camera may be a picture format such as BMP (Bitmap), TIFF (Tagged Image File Format) or JPEG (Joint Photographic Experts Group), or a video format such as MPEG (Moving Pictures Experts Group) or AVI (Audio Video Interleaved). The target tracking device takes the image sequence as the images to be matched and determines the reference image from the first frame of the sequence; the reference image records the target that needs to be tracked. Assuming the size of the reference image is M*N (M and N being numbers of pixels) and the size of the image to be matched is m*n (m and n being numbers of pixels), then M<m and N<n.
Step 202: extract corner features from the reference image with a preset algorithm to obtain a first feature point set of the reference image, the reference image being used to track the target in the current image to be matched.
Specifically, the algorithm used by the target tracking device to extract corner features from the reference image may be the Harris corner detection algorithm, the FAST corner detection algorithm, the KLT corner detection algorithm or the SUSAN corner detection algorithm, or any other algorithm; the present invention is not limited in this respect. The target tracking device extracts the corner features of the reference image to obtain a first feature point set, and this first feature point set is a binary image. The reference image is an image that records the target to be tracked, and it is matched against the image to be matched in order to track the target.
Step 203: remove from the first feature point set the abnormal points that fall outside the range of the image to be matched, to obtain a second feature point set.
Specifically, the target tracking device obtains the coordinates of each corner point in the first feature point set and, at the same time, the coordinates of each pixel in the image to be matched, and judges from the relationship between the two sets of coordinates whether a corner point falls outside the range of the image to be matched; if so, the corner point is determined to be an abnormal point. All abnormal points in the first feature point set are removed in this way to obtain the second feature point set.
Step 204: select, in the image to be matched, a sub-image to be matched of the same size as the reference image.
Specifically, following the example of step 201, assume the size of the reference image is M*N and the size of the image to be matched is m*n, with M<m and N<n. The target tracking device selects a sub-image to be matched of size M*N in the image to be matched. The search strategy for the sub-image to be matched is generally a traversal method or a genetic algorithm; the present invention is not limited in this respect.
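A hedged sketch of the traversal search mentioned in step 204: it simply slides an M*N window over the m*n image to be matched and enumerates the candidate sub-images. The stride of one pixel is an assumption made for the example; a coarser stride or a genetic search would also fit the description.

```python
def candidate_subimages(image, tmpl_h, tmpl_w, stride=1):
    """Yield (top, left, sub-image) for every M*N window of the image to be
    matched, in raster order (the traversal search of step 204)."""
    img_h, img_w = image.shape[:2]
    for top in range(0, img_h - tmpl_h + 1, stride):
        for left in range(0, img_w - tmpl_w + 1, stride):
            yield top, left, image[top:top + tmpl_h, left:left + tmpl_w]
```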
Step 205: extract corner features of the sub-image to be matched with the preset algorithm to obtain a third feature point set.
Specifically, the target tracking device extracts the corner features of the sub-image to be matched with the same algorithm as in step 202 to obtain the third feature point set.
Step 206: calculate the distance between the second feature point set and the third feature point set.
Specifically, the target tracking device applies a preset transform algorithm to perform a distance transform on the second feature point set to obtain a binary image A, and applies the same preset transform algorithm to perform a distance transform on the third feature point set to obtain a binary image B; the preset transform algorithm may be the 3-4 DT (chamfer 3-4 distance transform). The distance between binary image A and binary image B may be computed with a Euclidean or a non-Euclidean distance; the Hausdorff distance is taken here as an example. The Hausdorff distance is a max-min distance defined on two point sets. For example, to compute the Hausdorff distance between the above binary images A and B, let A = {a1, ..., ap} and B = {b1, ..., bq}; the Hausdorff distance between these two point sets is defined as
H(A, B) = max(h(A, B), h(B, A))   (1)
h(A, B) = max_{a∈A} min_{b∈B} ‖a − b‖   (2)
h(B, A) = max_{b∈B} min_{a∈A} ‖b − a‖   (3)
where ‖·‖ is a distance norm between the points of sets A and B (e.g., the L2 or Euclidean norm).
Here, equation (1) is called the bidirectional Hausdorff distance and is the most basic form of the Hausdorff distance; h(A, B) and h(B, A) in equations (2) and (3) are called the directed (one-way) Hausdorff distances from set A to set B and from set B to set A, respectively. That is, h(A, B) first computes, for each point ai in point set A, the distance ‖ai − bj‖ to the point bj in set B closest to ai, sorts these distances, and then takes the maximum as the value of h(A, B); h(B, A) is obtained in the same way. From equation (1), the bidirectional Hausdorff distance H(A, B) is the larger of the directed distances h(A, B) and h(B, A), and it measures the maximum degree of mismatch between the two point sets.
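A small sketch of the directed and bidirectional Hausdorff distances of equations (1)-(3), assuming the two feature point sets are given directly as arrays of 2-D coordinates; the brute-force pairwise computation is an illustrative choice, not the distance-transform-based implementation the embodiment may actually use.

```python
import numpy as np

def directed_hausdorff(a, b):
    """h(A, B): for every point in A take the distance to its nearest point
    in B, then return the maximum of these distances (equation (2))."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Bidirectional Hausdorff distance H(A, B) of equation (1)."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```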
Step 207: judge whether the distance is smaller than a preset threshold.
Specifically, the target tracking device judges whether the distance calculated in step 206 is smaller than the preset threshold; if so, step 208 is executed, and if not, the procedure returns to step 204.
Step 208: determine that the reference image is successfully matched with the sub-image to be matched in the image to be matched.
Specifically, the target tracking device determines that the image to be matched contains the target that needs to be tracked, that the reference image is successfully matched with the image to be matched, and that the target in the sub-image to be matched is tracked.
Step 209: delete the reference image and take the sub-image to be matched that was successfully matched with the reference image as the new reference image.
Specifically, the successfully matched sub-image of the image to be matched is taken as the new reference image, and when target tracking is performed on the next frame to be matched, the new reference image is used for image matching. By dynamically updating the reference image, the shape and size of the target in the reference image can be adjusted, which makes image matching more accurate.
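Putting steps 201-209 together, one frame of the tracking loop might look like the hedged sketch below. It reuses the illustrative helpers sketched earlier (extract_corner_points, remove_out_of_range_points, candidate_subimages, hausdorff), all of which are assumptions of this example rather than the patented implementation.

```python
def track_in_frame(reference, frame, threshold):
    """Match the reference image against one frame to be matched; return
    (top, left, new_reference) on success, or None if no sub-image matches."""
    tmpl_h, tmpl_w = reference.shape[:2]
    ref_pts = extract_corner_points(reference)                         # step 202
    # Step 203: drop reference corners outside the frame (assumes the corner
    # coordinates are expressed in the frame's coordinate system).
    ref_pts = remove_out_of_range_points(ref_pts, frame.shape[1], frame.shape[0])
    for top, left, sub in candidate_subimages(frame, tmpl_h, tmpl_w):  # step 204
        sub_pts = extract_corner_points(sub)                           # step 205
        if len(ref_pts) == 0 or len(sub_pts) == 0:
            continue
        if hausdorff(ref_pts, sub_pts) < threshold:                    # steps 206-207
            return top, left, sub.copy()                               # steps 208-209
    return None
```

On a successful match, the returned sub-image would replace the reference image before the next frame is processed, which corresponds to the dynamic template update of step 209.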
In the embodiment of the present invention, the feature point set of the reference image is obtained by extracting the corner features of the reference image; the abnormal points in the feature point set that fall outside the range of the image to be matched are removed; and the feature point set with the abnormal points removed is matched against the image to be matched, so as to achieve target tracking. Using corner features for matching reduces the amount of data involved in the computation while preserving the important graphical features of the image; corner features are highly distinctive and stable, which greatly improves the matching speed. At the same time, the method optimizes the feature point set of the reference image by removing out-of-range corner points, which overcomes the prior-art shortcoming that out-of-range feature points cannot be matched and improves the reliability of matching.
Referring to Fig. 3, which is a schematic structural diagram of a target tracking device according to an embodiment of the present invention, hereinafter referred to as device 1, the device 1 includes:
a feature extraction module 11, configured to extract corner features of a reference image with a preset algorithm to obtain a first feature point set of the reference image, the reference image being used to track the target in the current image to be matched.
Specifically, the algorithm used by the feature extraction module 11 to extract corner features from the reference image may be the Harris corner detection algorithm, the FAST corner detection algorithm, the KLT corner detection algorithm or the SUSAN corner detection algorithm, or any other algorithm; the present invention is not limited in this respect. The target tracking device extracts the corner features of the reference image to obtain a first feature point set, and this first feature point set is a binary image. The reference image is an image that records the target to be tracked, and it is matched against the image to be matched in order to track the target.
a feature optimization module 12, configured to remove from the first feature point set the abnormal points that fall outside the range of the image to be matched, to obtain a second feature point set.
Specifically, when the shape of the target in the image to be matched is incomplete, for example when the target is partially occluded or lies partly outside the image to be matched, the first feature point set extracted from the reference image contains corner points that do not fall within the range of the image to be matched; these become abnormal points. If these abnormal points take part in the matching computation, the matching result cannot converge to the correct result, that is, the matching of the reference image and the image to be matched fails. To avoid this, the feature optimization module 12 obtains the coordinates of each corner point in the first feature point set, judges from the relationship between the corner coordinates and the coordinates of the image to be matched whether the corner point falls outside the range of the image to be matched, and if so treats the corner point as an abnormal point; all abnormal points are removed in this way to obtain the second feature point set.
an image matching module 13, configured to perform image matching between the second feature point set and the image to be matched, and complete target tracking of the image to be matched according to the matching result.
Specifically, the image matching module 13 selects, in the image to be matched, a sub-image to be matched of the same size as the reference image, and extracts the corner features of this sub-image with the same corner extraction algorithm as the feature extraction module 11 to obtain a third feature point set. Matching based on corner features can be done in two ways. One is to describe the extracted second feature point set of the reference image and the third feature point set of the sub-image to be matched in a predetermined way, so that image matching becomes the matching of two vector sets. The other is to represent the extracted feature point sets as binary images, apply a distance transform to the binary images, and measure the similarity of the two feature point sets with a distance-based similarity. Other methods may of course be used; the present invention is not limited in this respect.
In the embodiment of the present invention, the feature point set of the reference image is obtained by extracting the corner features of the reference image; the abnormal points in the feature point set that fall outside the range of the image to be matched are removed; and the feature point set with the abnormal points removed is matched against the image to be matched, so as to achieve target tracking. Using corner features for matching reduces the amount of data involved in the computation while preserving the important graphical features of the image; corner features are highly distinctive and stable, which greatly improves the matching speed. At the same time, the method optimizes the feature point set of the reference image by removing out-of-range corner points, which overcomes the prior-art shortcoming that out-of-range feature points cannot be matched and improves the reliability of matching.
Further, referring to Figs. 4-6, which show another schematic structural diagram of a target tracking device according to an embodiment of the present invention, in addition to the above feature extraction module 11, feature optimization module 12 and image matching module 13, the device 1 further includes:
a template determination module 14, configured to acquire an image sequence collected by a camera, take the image sequence as the images to be matched, and determine the reference image of the images to be matched from the first frame of the image sequence.
Specifically, the format of the image sequence acquired by the template determination module 14 may be a picture format such as BMP, TIFF or JPEG, or a video format such as MPEG or AVI. The template determination module 14 takes the image sequence as the images to be matched and determines the reference image from the first frame of the sequence; the reference image records the target that needs to be tracked. Assuming the size of the reference image is M*N (M and N being numbers of pixels) and the size of the image to be matched is m*n (m and n being numbers of pixels), then M<m and N<n.
a template update module 15, configured to delete the reference image and take the sub-image to be matched that was successfully matched with the reference template as the new reference image.
Specifically, the template update module 15 takes the successfully matched sub-image of the image to be matched as the new reference image, and when target tracking is performed on the next frame to be matched, the new reference image is used for image matching. By dynamically updating the reference image, the shape and size of the target in the reference image can be adjusted, which makes image matching more accurate.
The feature optimization module 12 includes:
an acquisition unit 121, configured to obtain the coordinates of each feature point in the first feature point set;
a judging unit 122, configured to judge, according to the coordinates of each feature point in the feature point set, whether the point falls outside the range of the image to be matched, and if so, determine that the feature point is an abnormal point;
an elimination unit 123, configured to remove all abnormal points from the first feature point set to obtain the second feature point set.
The image matching module 13 includes:
a selection unit 131, configured to select, in the image to be matched, a sub-image to be matched of the same size as the reference image;
an extraction unit 132, configured to extract features of the sub-image to be matched with the preset algorithm to obtain a third feature point set;
a calculation unit 133, configured to calculate the distance between the second feature point set and the third feature point set;
a matching unit 134, configured to judge whether the distance is smaller than a preset threshold; if so, determine that the reference image is successfully matched with the sub-image to be matched in the image to be matched; if not, re-execute the step of selecting, in the image to be matched, a sub-image to be matched of the same size as the reference image.
In the embodiment of the present invention, the feature point set of the reference image is obtained by extracting the corner features of the reference image; the abnormal points in the feature point set that fall outside the range of the image to be matched are removed; and the feature point set with the abnormal points removed is matched against the image to be matched, so as to achieve target tracking. Using corner features for matching reduces the amount of data involved in the computation while preserving the important graphical features of the image; corner features are highly distinctive and stable, which greatly improves the matching speed. At the same time, the method optimizes the feature point set of the reference image by removing out-of-range corner points, which overcomes the prior-art shortcoming that out-of-range feature points cannot be matched and improves the reliability of matching.
Referring to Fig. 7, which is yet another schematic structural diagram of a target tracking device according to an embodiment of the present invention, the target tracking device 1 includes a processor 61, a memory 62, an input device 63 and an output device 64. The number of processors 61 in the target tracking device 1 may be one or more; Fig. 7 takes one processor as an example. In some embodiments of the present invention, the processor 61, the memory 62, the input device 63 and the output device 64 may be connected by a bus or in other ways; Fig. 7 takes a bus connection as an example.
The memory 62 stores a set of program code, and the processor 61 is configured to call the program code stored in the memory 62 to perform the following operations:
extracting corner features of a reference image with a preset algorithm to obtain a first feature point set of the reference image, the reference image being used to track the target in the current image to be matched;
removing from the first feature point set the abnormal points that fall outside the range of the image to be matched, to obtain a second feature point set;
performing image matching between the second feature point set and the image to be matched, and completing target tracking of the image to be matched according to the matching result.
Further, in some embodiments of the present invention, the processor 61 extracts the corner features of the reference image with a preset algorithm including the Harris corner detection algorithm, the FAST corner detection algorithm, the KLT corner detection algorithm or the SUSAN corner detection algorithm, to obtain the first feature point set of the reference image.
Preferably, in some embodiments of the present invention, the processor 61 is further configured to perform:
acquiring an image sequence collected by a camera, and taking the image sequence as the images to be matched;
and determining the reference image of the images to be matched from the first frame of the image sequence.
Preferably, in some embodiments of the present invention, the step in which the processor 61 removes from the first feature point set the abnormal points that fall outside the range of the image to be matched to obtain a second feature point set includes:
obtaining the coordinates of each feature point in the first feature point set;
judging, according to the coordinates of each feature point in the feature point set, whether the point falls outside the range of the image to be matched, and if so, determining that the feature point is an abnormal point;
removing all abnormal points from the first feature point set to obtain the second feature point set.
Preferably, in some embodiments of the present invention, the step in which the processor 61 performs image matching between the second feature point set and the image to be matched and completes target tracking of the image to be matched according to the matching result includes:
selecting, in the image to be matched, a sub-image to be matched of the same size as the reference image;
extracting features of the sub-image to be matched with the preset algorithm to obtain a third feature point set;
calculating the distance between the second feature point set and the third feature point set;
judging whether the distance is smaller than a preset threshold; if so, determining that the reference image is successfully matched with the sub-image to be matched in the image to be matched; if not, re-executing the step of selecting, in the image to be matched, a sub-image to be matched of the same size as the reference image.
Preferably, in some embodiments of the present invention, the processor 61 is further configured to delete the reference image and take the sub-image to be matched that was successfully matched with the reference template as the new reference image.
In the embodiment of the present invention, the feature point set of the reference image is obtained by extracting the corner features of the reference image; the abnormal points in the feature point set that fall outside the range of the image to be matched are removed; and the feature point set with the abnormal points removed is matched against the image to be matched, so as to achieve target tracking. Using corner features for matching reduces the amount of data involved in the computation while preserving the important graphical features of the image; corner features are highly distinctive and stable, which greatly improves the matching speed. At the same time, the method optimizes the feature point set of the reference image by removing out-of-range corner points, which overcomes the prior-art shortcoming that out-of-range feature points cannot be matched and improves the reliability of matching.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. When executed, the program may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
What is disclosed above is only a preferred embodiment of the present invention and certainly cannot be used to limit the scope of the rights of the present invention. Those of ordinary skill in the art can understand all or part of the processes for implementing the above embodiments, and equivalent changes made according to the claims of the present invention still fall within the scope covered by the invention.
Claims (12)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310183174.1A CN103279952B (en) | 2013-05-17 | 2013-05-17 | A kind of method for tracking target and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103279952A true CN103279952A (en) | 2013-09-04 |
CN103279952B CN103279952B (en) | 2017-10-17 |
Family
ID=49062459
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310183174.1A Active CN103279952B (en) | 2013-05-17 | 2013-05-17 | A kind of method for tracking target and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103279952B (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104715470A (en) * | 2013-12-13 | 2015-06-17 | 南京理工大学 | Klt corner point detecting device and method |
CN105469427A (en) * | 2015-11-26 | 2016-04-06 | 河海大学 | Target tracking method applied to videos |
CN106250863A (en) * | 2016-08-09 | 2016-12-21 | 北京旷视科技有限公司 | object tracking method and device |
CN106960179A (en) * | 2017-02-24 | 2017-07-18 | 北京交通大学 | Rail line Environmental security intelligent monitoring method and device |
WO2017167159A1 (en) * | 2016-03-29 | 2017-10-05 | 中兴通讯股份有限公司 | Image positioning method and device |
CN107273801A (en) * | 2017-05-15 | 2017-10-20 | 南京邮电大学 | A kind of method of video multi-target tracing detection abnormity point |
CN107564033A (en) * | 2017-07-26 | 2018-01-09 | 北京臻迪科技股份有限公司 | A kind of tracking of submarine target, underwater installation and wearable device |
CN108021921A (en) * | 2017-11-23 | 2018-05-11 | 塔普翊海(上海)智能科技有限公司 | Image characteristic point extraction system and its application |
CN109685830A (en) * | 2018-12-20 | 2019-04-26 | 浙江大华技术股份有限公司 | Method for tracking target, device and equipment and computer storage medium |
CN109919971A (en) * | 2017-12-13 | 2019-06-21 | 北京金山云网络技术有限公司 | Image processing method, apparatus, electronic device, and computer-readable storage medium |
CN110148178A (en) * | 2018-06-19 | 2019-08-20 | 腾讯科技(深圳)有限公司 | Camera localization method, device, terminal and storage medium |
CN113409373A (en) * | 2021-06-25 | 2021-09-17 | 浙江商汤科技开发有限公司 | Image processing method, related terminal, device and storage medium |
CN114926508A (en) * | 2022-07-21 | 2022-08-19 | 深圳市海清视讯科技有限公司 | Method, device, equipment and storage medium for determining visual field boundary |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109567600B (en) * | 2018-12-05 | 2020-12-01 | 江西书源科技有限公司 | Automatic accessory identification method for household water purifier |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101226592A (en) * | 2008-02-21 | 2008-07-23 | 上海交通大学 | Part-Based Object Tracking Method |
CN101399969A (en) * | 2007-09-28 | 2009-04-01 | 三星电子株式会社 | System, device and method for moving target detection and tracking based on moving camera |
JP2010002976A (en) * | 2008-06-18 | 2010-01-07 | Secom Co Ltd | Image monitoring device |
CN101840507A (en) * | 2010-04-09 | 2010-09-22 | 江苏东大金智建筑智能化系统工程有限公司 | Target tracking method based on character feature invariant and graph theory clustering |
CN102750691A (en) * | 2012-05-29 | 2012-10-24 | 重庆大学 | Corner pair-based image registration method for Cauchy-Schwarz (CS) divergence matching |
- 2013-05-17: CN CN201310183174.1A patent/CN103279952B/en active Active
Non-Patent Citations (3)
Title |
---|
冯增光,张炯,宁纪锋,颜永丰: "基于角点检测的实时目标跟踪方法", 《计算机工程与设计》 * |
汪颖进: "目标跟踪过程中的遮挡问题研究", 《中国优秀博硕士学位论文全文数据库 (硕士) 信息科技辑》 * |
罗刚,张云峰: "应用角点匹配实现目标跟踪", 《中国光学与应用光学》 * |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104715470B (en) * | 2013-12-13 | 2017-09-22 | 南京理工大学 | A kind of klt Corner Detections device and method |
CN104715470A (en) * | 2013-12-13 | 2015-06-17 | 南京理工大学 | Klt corner point detecting device and method |
CN105469427A (en) * | 2015-11-26 | 2016-04-06 | 河海大学 | Target tracking method applied to videos |
CN105469427B (en) * | 2015-11-26 | 2018-06-19 | 河海大学 | One kind is for method for tracking target in video |
WO2017167159A1 (en) * | 2016-03-29 | 2017-10-05 | 中兴通讯股份有限公司 | Image positioning method and device |
CN106250863B (en) * | 2016-08-09 | 2019-07-26 | 北京旷视科技有限公司 | Object tracking method and device |
CN106250863A (en) * | 2016-08-09 | 2016-12-21 | 北京旷视科技有限公司 | object tracking method and device |
CN106960179A (en) * | 2017-02-24 | 2017-07-18 | 北京交通大学 | Rail line Environmental security intelligent monitoring method and device |
CN107273801A (en) * | 2017-05-15 | 2017-10-20 | 南京邮电大学 | A kind of method of video multi-target tracing detection abnormity point |
CN107273801B (en) * | 2017-05-15 | 2021-11-30 | 南京邮电大学 | Method for detecting abnormal points by video multi-target tracking |
CN107564033A (en) * | 2017-07-26 | 2018-01-09 | 北京臻迪科技股份有限公司 | A kind of tracking of submarine target, underwater installation and wearable device |
CN108021921A (en) * | 2017-11-23 | 2018-05-11 | 塔普翊海(上海)智能科技有限公司 | Image characteristic point extraction system and its application |
CN109919971B (en) * | 2017-12-13 | 2021-07-20 | 北京金山云网络技术有限公司 | Image processing method, apparatus, electronic device, and computer-readable storage medium |
CN109919971A (en) * | 2017-12-13 | 2019-06-21 | 北京金山云网络技术有限公司 | Image processing method, apparatus, electronic device, and computer-readable storage medium |
CN110148178A (en) * | 2018-06-19 | 2019-08-20 | 腾讯科技(深圳)有限公司 | Camera localization method, device, terminal and storage medium |
US11210810B2 (en) | 2018-06-19 | 2021-12-28 | Tencent Technology (Shenzhen) Company Limited | Camera localization method and apparatus, terminal, and storage medium |
CN110148178B (en) * | 2018-06-19 | 2022-02-22 | 腾讯科技(深圳)有限公司 | Camera positioning method, device, terminal and storage medium |
CN109685830B (en) * | 2018-12-20 | 2021-06-15 | 浙江大华技术股份有限公司 | Target tracking method, device and equipment and computer storage medium |
CN109685830A (en) * | 2018-12-20 | 2019-04-26 | 浙江大华技术股份有限公司 | Method for tracking target, device and equipment and computer storage medium |
CN113409373A (en) * | 2021-06-25 | 2021-09-17 | 浙江商汤科技开发有限公司 | Image processing method, related terminal, device and storage medium |
CN114926508A (en) * | 2022-07-21 | 2022-08-19 | 深圳市海清视讯科技有限公司 | Method, device, equipment and storage medium for determining visual field boundary |
CN114926508B (en) * | 2022-07-21 | 2022-11-25 | 深圳市海清视讯科技有限公司 | Visual field boundary determining method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN103279952B (en) | 2017-10-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 2019-12-18. Address after: No. 1, Floor 3, No. 319, Zhanggongshan Road, Yuhui District, Bengbu City, Anhui Province. Patentee after: Bengbu Guijiu Intellectual Property Service Co., Ltd. Address before: 518129, Bantian Huawei headquarters office building, Longgang District, Shenzhen, Guangdong. Patentee before: Huawei Technologies Co., Ltd. |
|
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 2020-10-21. Address after: C 013, C 015, C 016, C 020, C 021, C 022, 3/F, E-commerce Industrial Park, Nantong Home Textile City, Jinchuan Avenue, Chuanjiang Town, Tongzhou District, Nantong City, Jiangsu Province, 226000. Patentee after: Ruide Yinfang (Nantong) Information Technology Co., Ltd. Address before: No. 1, Floor 3, No. 319, Zhanggongshan Road, Yuhui District, Bengbu City, Anhui Province. Patentee before: Bengbu Guijiu Intellectual Property Service Co., Ltd. |