CN108492324A - Aircraft tracking method based on fully-connected network and Kalman filter
Classifications
- G06T 7/277 — Analysis of motion involving stochastic approaches, e.g. using Kalman filters
- G06T 7/251 — Analysis of motion using feature-based methods involving models
- G06T 2207/10016 — Video; image sequence
- G06T 2207/20104 — Interactive definition of region of interest [ROI]
Description
Technical Field
The invention belongs to the field of computer vision and relates to a deep-learning-based method for tracking aircraft in video.
Background Art
Aircraft tracking is an important technology in aviation safety and related fields, and the use of technical means to strengthen security measures in military reconnaissance has increasingly drawn national attention. Target tracking, an important area of computer vision, makes it possible to detect and even track aircraft in acquired video footage.
In recent years, with the development of deep learning, machine learning algorithms have gradually been applied across computer vision, and deep-learning-based object detection and tracking techniques have advanced rapidly, offering much better tracking performance than traditional methods. Popular object detection strategies fall into two categories: those based on region proposals and those that process regions directly. However, the accuracy of aircraft tracking is strongly affected by complex environmental conditions, and visual tracking algorithms still face challenging problems: sudden motion, attitude changes, deformation, occlusion, background clutter, and illumination or viewpoint changes all reduce tracking accuracy and can even cause tracking failure. At present there is no effective algorithm dedicated to the aircraft tracking problem.
Summary of the Invention
The purpose of the present invention is to establish a more accurate aircraft tracking method. The proposed method comprises three main parts: a detection model based on R-FCN, a state estimation model based on a Kalman filter, and an aircraft trajectory correction module. The technical solution is as follows:
An aircraft tracking method based on a fully-connected network and a Kalman filter, comprising the following steps:
Step 1: Use the R-FCN network (region-based fully convolutional network [1]) to detect the video frame by frame and obtain the bounding box in the previous frame for use in trajectory correction.
Step 2: Construct a state vector describing the aircraft trajectory. The state vector must represent both the position of the target aircraft's center point and the size and aspect ratio of the bounding box.
Step 3: To avoid target drift caused by a detection failure on a particular frame, combine the Kalman filter with the extended Kalman filter and separate sub-vectors from the constructed state vector to describe the moving target. The specific method is as follows:
(1) The Kalman filter handles the linear part: the sub-vector representing the position of the aircraft's center point is approximated by a linear model. The Kalman gain is then computed from the uncertainties of the prediction and the current observation, and a weighted average of the prediction and the observation yields the state estimate at the current time together with its uncertainty.
(2) The extended Kalman filter fits the nonlinear part that a linear model cannot capture: a state sub-vector representing the size and aspect ratio of the bounding box is created in the same way as in (1), but the state matrix and mapping matrix involved are no longer constant matrices. This yields the current state estimate of the nonlinear part and its uncertainty.
(3) The nonlinear part is added to the linear system to describe the motion state of the aircraft.
Step 4: When the detection result deviates substantially, restrict detection to an effective range determined by the size of the target object (which also speeds up detection) and correct the position of the bounding box across adjacent frames, thereby correcting the motion trajectory. If the overlap (IOU) between the bounding box and the window containing the aircraft exceeds a predefined threshold T, the position and size of the current detection box are modified according to the bounding box of the previous frame; otherwise the target is taken as the center, a bounding box is drawn around it, and it is fed to the detection network for training. The correction formula is as follows:
Here δ_a and δ_b denote the confidences of the detection results in the previous and current frames; higher values indicate a more accurate model. w_a, w_b, w_c denote the widths of the bounding boxes in the previous frame, the current frame, and after correction, and h_a, h_b, h_c the corresponding heights. (x_a, y_a), (x_b, y_b), (x_c, y_c) denote the horizontal and vertical coordinates of the respective bounding-box center points.
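A plausible form of the correction, assuming a confidence-weighted average of the previous-frame box (subscript a) and current-frame box (subscript b) consistent with the variables defined above, is:

```latex
x_c = \frac{\delta_a x_a + \delta_b x_b}{\delta_a + \delta_b}, \qquad
y_c = \frac{\delta_a y_a + \delta_b y_b}{\delta_a + \delta_b}, \qquad
w_c = \frac{\delta_a w_a + \delta_b w_b}{\delta_a + \delta_b}, \qquad
h_c = \frac{\delta_a h_a + \delta_b h_b}{\delta_a + \delta_b}
```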
Step 5: Collect videos containing aircraft motion and process them into clips of uniform, fixed length to form a training database. The videos are randomly split into two parts: 80% are fed to R-FCN as the training set, and the trained model is used to predict on the remaining 20%, yielding the aircraft tracking results.
The specific aircraft tracking algorithm proposed by the invention is built on R-FCN and the Kalman filter. R-FCN is the detection model used to obtain the aircraft's position information. To reduce detection time, a specific region is cropped from each frame according to the position of the bounding box in the previous frame, and the size of the detection region can be adapted to the size of the target. The Kalman filter serves as an evaluation model to adjust the predicted motion trace. When the detections in adjacent frames differ greatly, the bounding box of the next frame can be adjusted according to the detection result to improve detection reliability. The method can therefore accurately detect and track the aircraft's trajectory.
Brief Description of the Drawings
Figure 1: Flowchart.
Figure 2: Schematic diagram of the correction method.
Detailed Description
The present invention provides an aircraft tracking method based on R-FCN [1] and the Kalman filter [2, 3], comprising three main parts: a detection model based on R-FCN, a state estimation model based on the Kalman filter (KF), and an aircraft trajectory correction module. The method can be expressed as the following steps:
Step 1: Obtain the bounding box of the previous frame through R-FCN. The specific method is as follows:
(1) Following the R-FCN principle, position-sensitive score maps are created to encode the relative spatial position information of a region of interest (ROI). Using bounding-box regression, a 4-dimensional vector v = (v_x, v_y, v_w, v_h) is produced for each ROI and used in the subsequent computation of the bounding box, where v_x, v_y are the x, y coordinates of the center point and v_w, v_h are the width and height of the bounding box.
(2) The ROI is divided into four sub-regions (upper-left, lower-left, upper-right, lower-right), which serve as the score maps. The bounding-box vertices are then denoted B_tl, B_tr, B_bl, B_br, and their relationship to v_x, v_y, v_w, v_h is as follows:
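A form of this relationship consistent with the center/width/height definitions above (a sketch; the patent's exact expression is not reproduced here) is:

```latex
B_{tl} = \left(v_x - \tfrac{v_w}{2},\; v_y - \tfrac{v_h}{2}\right), \quad
B_{tr} = \left(v_x + \tfrac{v_w}{2},\; v_y - \tfrac{v_h}{2}\right), \quad
B_{bl} = \left(v_x - \tfrac{v_w}{2},\; v_y + \tfrac{v_h}{2}\right), \quad
B_{br} = \left(v_x + \tfrac{v_w}{2},\; v_y + \tfrac{v_h}{2}\right)
```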
Step 2: Construct a state vector describing the aircraft trajectory. Its components represent the position of the target center point and the scale and aspect ratio of the bounding box; dotted symbols denote the corresponding time derivatives. This step mainly handles the linear part, as follows:
(1) Approximation by a linear model: a sub-vector is separated from the constructed state vector to describe the moving target.
x_k = A x_{k-1} + B u_k + w_k    (2)
z_k = H x_k + v_k    (3)
Here x is the state vector of the system, z the observation, and A the state transition matrix; the control terms formed by B and u can be ignored in the uncontrolled system considered in this invention. H is the observation model (observation matrix), which maps the true state into the observation space. w and v denote the noise in the state update and the observation process, respectively; previous work shows that these noises follow Gaussian distributions. Equation (2) is called the state equation and equation (3) the observation equation; the subscript k denotes the value at time k.
(2) From the Kalman filter principle, the prediction step follows. Here P is the error covariance matrix between the predicted and true values, representing the uncertainty of the prediction, and Q is the new uncertainty added during the prediction process. Equation (4) predicts the current state from the state at the previous time step; equation (5) propagates the previously existing uncertainty and adds the new process uncertainty Q.
(3) The Kalman gain K and the k-th state estimate are then computed: equation (6) computes the Kalman gain (weight) K from the uncertainty of the prediction and the uncertainty R of the observation, and equation (7) forms the weighted average of the prediction and the observation to obtain the state estimate at the current time.
(4) Finally, the error covariance P is updated, expressing the uncertainty of the current state estimate.
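The predict/correct cycle of steps (1)-(4) is the standard Kalman filter recursion, which can be sketched in NumPy as follows. The constant-velocity motion model for the center point and all noise settings below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def kf_predict(x, P, A, Q):
    # Eq. (4): predict the current state from the previous state
    x_pred = A @ x
    # Eq. (5): propagate the previous uncertainty and add process noise Q
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    # Eq. (6): Kalman gain from predicted uncertainty and observation noise R
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Eq. (7): weighted average of prediction and observation
    x_new = x_pred + K @ (z - H @ x_pred)
    # Update the error covariance of the new estimate
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

# Assumed constant-velocity model for the aircraft center (x, y, x_dot, y_dot)
dt = 1.0
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only the position is observed
Q = np.eye(4) * 0.01                        # process noise (assumed)
R = np.eye(2) * 1.0                         # observation noise (assumed)

x = np.array([0.0, 0.0, 1.0, 0.5])          # initial state
P = np.eye(4)
z = np.array([1.1, 0.4])                    # detection in the next frame

x_pred, P_pred = kf_predict(x, P, A, Q)
x_est, P_est = kf_update(x_pred, P_pred, z, H, R)
```

The resulting estimate lies between the motion prediction and the noisy detection, weighted by their respective uncertainties, which is exactly the behavior described in step (3).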
Step 3: Use the extended Kalman filter (EKF) to fit s and r, the nonlinear part unsuited to a linear model. The sub-vector is created in the same way as above, but the state matrix A and the mapping matrix H are no longer constant matrices; they are expressed as follows:
Here the matrices F_k and H_k are derived from Jacobian matrices and correspond to A and H in the KF, respectively.
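In standard EKF notation, writing the nonlinear state transition and observation functions as f and h, these Jacobians are evaluated at the latest state estimates:

```latex
F_k = \left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{k-1|k-1}}, \qquad
H_k = \left.\frac{\partial h}{\partial x}\right|_{\hat{x}_{k|k-1}}
```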
Step 4: Add the nonlinear part to the linear system to describe the motion state of the aircraft.
Step 5: Correct the motion trajectory by judging whether the bounding box correctly detects the moving aircraft. If the intersection over union (IOU) between the bounding box and the window containing the aircraft exceeds a predefined threshold T, the position and size of the current detection box are modified according to the bounding box of the previous frame; otherwise the target is taken as the center, a bounding box is drawn around it, and it is fed to the detection network for training. The specific correction formula is as follows:
Here δ_a and δ_b denote the confidences of the detection results in the previous and current frames; higher values indicate a more accurate model. w_a, w_b, w_c denote the widths of the previous-frame, current-frame, and corrected bounding boxes, and h_a, h_b, h_c the corresponding heights. (x_a, y_a), (x_b, y_b), (x_c, y_c) denote the horizontal and vertical coordinates of the respective bounding-box center points.
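The IOU check and the correction can be sketched as follows. The IOU is the standard definition; `correct_box` assumes the correction formula is a confidence-weighted average of the two boxes, which is consistent with the variables listed above but is an assumption, not the patent's verbatim formula:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x, y, w, h), center format."""
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def correct_box(box_a, conf_a, box_b, conf_b):
    """Confidence-weighted correction of the current box (assumed formula form)."""
    s = conf_a + conf_b
    return tuple((conf_a * a + conf_b * b) / s for a, b in zip(box_a, box_b))
```

For example, with equal confidences the corrected box is the midpoint of the previous-frame and current-frame boxes; as δ_a grows relative to δ_b, the correction trusts the previous frame more.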
Step 6: Collect videos containing aircraft motion and, for convenience of training, process them into clips of uniform, fixed length to form a training database. The videos are randomly split into two parts: 80% are fed to R-FCN as the training set, and the trained model predicts on the remaining 20%, yielding the aircraft tracking results.
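The random 80/20 split can be sketched as below; the clip filenames and the fixed seed are hypothetical, introduced only for illustration:

```python
import random

def split_dataset(video_paths, train_ratio=0.8, seed=42):
    """Randomly split video clips into training and test sets (80/20 as in the text)."""
    videos = list(video_paths)
    random.Random(seed).shuffle(videos)   # deterministic shuffle for reproducibility
    cut = int(len(videos) * train_ratio)
    return videos[:cut], videos[cut:]

clips = [f"clip_{i:03d}.mp4" for i in range(10)]   # hypothetical clip names
train, test = split_dataset(clips)
```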
References:
[1] Dai J, Li Y, He K, Sun J (2016) R-FCN: Object detection via region-based fully convolutional networks. In: Advances in Neural Information Processing Systems, pp 379-387.
[2] Kalman RE (1960) A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82(1):35-45.
[3] Kalman RE, Bucy RS (1961) New results in linear filtering and prediction theory. Journal of Basic Engineering 83(1):95-108.