
WO2021189507A1 - Rotor unmanned aerial vehicle system for vehicle detection and tracking, and detection and tracking method - Google Patents


Info

Publication number
WO2021189507A1
WO2021189507A1, PCT/CN2020/082257, CN2020082257W
Authority
WO
WIPO (PCT)
Prior art keywords
target
tracking
detection
vehicle
algorithm
Prior art date
Application number
PCT/CN2020/082257
Other languages
French (fr)
Chinese (zh)
Inventor
余犀
董晓飞
石霖
曹峰
孙明俊
Original Assignee
南京新一代人工智能研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京新一代人工智能研究院有限公司
Publication of WO2021189507A1 publication Critical patent/WO2021189507A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/48Matching video sequences


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Disclosed in the present application are a rotor unmanned aerial vehicle system for vehicle detection and tracking, and a detection and tracking method. The system comprises an unmanned aerial vehicle platform and a ground station platform. The unmanned aerial vehicle platform is used for computing, detecting, and tracking a target in real time; the ground station platform is used for carrying out tracking video monitoring of the target and issuing manual flight-control commands to the unmanned aerial vehicle platform. The present invention further provides a detection and tracking method based on the rotor unmanned aerial vehicle system. The present invention adopts an on-board (UAV end-side) computing scheme, so that the timeliness of the system is effectively improved; the YOLO Nano target detection algorithm is adopted, so that the computational load on the onboard computer is remarkably reduced; and by using the more robust Staple tracking algorithm together with a target re-detection module, the tracking precision is improved, and when the target is occluded or lost during real-time tracking, the re-detection algorithm is automatically started to search for the target again.

Description

Rotor unmanned aerial vehicle system for vehicle detection and tracking, and detection and tracking method

Technical Field
The invention relates to the field of rotary-wing unmanned aerial vehicles, and in particular to a rotary-wing unmanned aerial vehicle (UAV) system for vehicle detection and tracking, and a detection and tracking method.
Background Art
There are many types of drones. Among them, rotary-wing drones have many advantages, such as freedom from site restrictions, fixed-point hovering, slow flight, flight in confined spaces, and vertical take-off and landing, which make them widely used in aerial photography, agriculture, plant protection, micro selfies, express transportation, disaster rescue, infectious disease surveillance, surveying and mapping, news reporting, power-line inspection, disaster relief, film and television shooting, and other fields. In recent years, artificial intelligence technology has developed rapidly, and the combination of drones and artificial intelligence has become a new research focus. Deep-learning-based target detection and target tracking endow drones with "intelligence": the drone gains a wider information search area, a further improved ability to interpret and analyze fine-grained local information, and better perception of the surrounding environment and measurement accuracy. The empowerment of UAVs by artificial intelligence adds a pair of keen "eyes" to the UAV, enabling it to fly autonomously and perform higher-level tasks.
With the development of digital imaging technology, the camera has been widely studied as a sensor. Because people can estimate the position and distance of objects in their field of view through vision alone, and the imaging principle of a camera mimics the human eyes, the two-dimensional image from a camera can be used to recover three-dimensional information about the objects in the image. Visual perception systems have long been applied to UAVs, and the detection and tracking of ground targets by rotary-wing UAVs is a popular application scenario. Since deep-learning-based target detection and target tracking algorithms both rely on powerful computing resources, while the computing power on the UAV side is limited, most current solutions perform the computation at the ground station, with the UAV responsible only for collecting and transmitting images. Moreover, current target tracking algorithms cannot effectively handle target occlusion and target loss.
At present, mainstream rotary-wing UAV target detection uses the YOLOv3 neural network architecture: the ground station has strong computing power and can run a large network for object detection. The YOLOv3 network model consists mainly of 75 convolutional layers. Since it uses no fully connected layer, the network can accept input images of any size. In addition, no pooling layer is used; instead, the stride of the convolutional layers is set to 2 to achieve downsampling while passing scale-invariant features to the next layer. YOLOv3 also uses structures similar to ResNet and FPN, both of which are of great benefit to detection accuracy. This network performs well for motor vehicle detection from the drone's perspective. However, for such a large network model, the computation of the above structure is time-consuming and real-time performance is poor; it requires strong computing power and cannot be deployed on the end side.
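To illustrate the downsampling choice mentioned above (this is not code from the patent), a stride-2 convolution halves the spatial resolution while learning the downsampling filter, unlike a fixed pooling layer. A minimal PyTorch sketch, with illustrative channel counts and input size:

```python
import torch
import torch.nn as nn

# Replacing a pooling layer with a stride-2 convolution, as YOLOv3 does for
# downsampling. Channel counts and input resolution are illustrative only.
downsample = nn.Conv2d(in_channels=64, out_channels=128,
                       kernel_size=3, stride=2, padding=1)

x = torch.randn(1, 64, 416, 416)   # a typical YOLO input resolution
y = downsample(x)
print(y.shape)                     # torch.Size([1, 128, 208, 208]): halved
```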
At present, mainstream rotary-wing UAV target tracking is based on KCF. KCF (Kernelized Correlation Filter) evolved from the kernelized, detection-based circulant tracking structure (CSK) and uses online learning to solve the tracking problem. It is a machine learning method that requires no prior knowledge. In the first frame, an object-of-interest (OOI) region is selected manually, and the KCF tracker converts this region into a multi-channel HOG feature descriptor. Ridge regression on the HOG descriptor initializes the regression function f(z) for the OOI region z. For each new frame, f(z) is evaluated on several regions near the previous OOI region; the region with the largest response is taken as the output and used to update f(z). To speed up the ridge regression, KCF turns the descriptor of each HOG channel into a circulant matrix via cyclic shifts. A circulant matrix can be diagonalized by the discrete Fourier transform (DFT), so matrix computations, in particular matrix inversion, can be handled efficiently in the Fourier domain. In addition, a kernel function is applied in the KCF tracker to improve tracking performance by mapping the regression function f(z) into a nonlinear space. These solutions were introduced by CSK and optimized in KCF. As a result, KCF reaches a processing speed of 172 FPS and an average precision of 73.2%.
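To make the Fourier-domain trick concrete, here is a minimal sketch (not part of the patent) of linear correlation-filter training and detection for a single feature channel; the regularization weight `lam` and the desired Gaussian response map `y` are illustrative assumptions:

```python
import numpy as np

def train_filter(x, y, lam=1e-2):
    """Learn a correlation filter in the Fourier domain (linear-kernel case).

    x: 2-D feature patch, y: desired Gaussian response map, lam: ridge weight.
    Ridge regression over all cyclic shifts of x reduces to an element-wise
    division, because circulant matrices are diagonalized by the DFT."""
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    return Y * np.conj(X) / (X * np.conj(X) + lam)

def detect(filter_f, z):
    """Evaluate the learned filter on a new patch z; the argmax of the real
    response map gives the predicted target displacement."""
    Z = np.fft.fft2(z)
    response = np.real(np.fft.ifft2(filter_f * Z))
    return np.unravel_index(np.argmax(response), response.shape)
```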
Although performing inference at the ground station guarantees sufficient computing resources, the time consumed by large-scale data transmission is unacceptable, the uncertainty of the wireless transmission network introduces additional delay, and guaranteeing fast inference at the ground station comes at a high cost.
Real-time flight control is essential for UAV safety; a high delay inevitably degrades the detection and tracking performance and endangers flight safety. In addition, during target tracking, the KCF algorithm is affected by external uncertainties such as illumination changes and occlusion, which can cause the target to be lost; since the target cannot be relocated after it is lost, tracking ultimately fails.
Summary of the Invention
Purpose of the invention: to solve the problem of high flight-control delay in the prior art, the present invention proposes a rotary-wing UAV system for vehicle detection and tracking. An on-board (UAV end-side) computing scheme is adopted, and the ground station is used only for monitoring the tracking video and issuing manual flight-control commands, which effectively improves the timeliness of the system. Furthermore, the YOLO Nano target detection algorithm solves the problem of the heavy computing load on the onboard computer, and a Staple-based tracking algorithm with a target re-detection module solves the problem that KCF tracking is unstable under changes in the external environment.
Technical solution: a rotary-wing UAV system for vehicle detection and tracking comprises a UAV platform and a ground station platform. The on-board computing scheme greatly reduces the system delay and guarantees real-time detection and tracking of motor vehicle targets. The UAV platform computes, detects, and tracks the target in real time; the ground station platform monitors the tracking video of the target and issues manual flight-control commands to the UAV platform.
Further, the UAV platform comprises a visible-light camera, an onboard computer, a first wireless image-transmission terminal, and a flight control module, wherein the onboard computer is connected to the visible-light camera, the first wireless image-transmission terminal, and the flight control module, respectively. The ground station platform comprises a PC and a second wireless image-transmission terminal, which exchange information; the first wireless image-transmission terminal and the second wireless image-transmission terminal also exchange information.
The visible-light camera is used to collect image data.
The onboard computer is used to run the target detection algorithm and the target tracking algorithm.
The first wireless image-transmission terminal is used to transmit the real-time target tracking video stream and to receive manual flight-control commands from the ground station.
The PC is used to monitor the real-time target tracking video stream and to issue manual flight-control commands.
The second wireless image-transmission terminal is used to receive the real-time target tracking video stream and to send manual flight-control commands.
Further, the target detection algorithm run by the onboard computer adopts the YOLO Nano algorithm.
Further, the target tracking algorithm run by the onboard computer adopts the Staple tracking algorithm.
Further, the onboard computer also includes a target re-detection module, which judges whether the target is occluded according to the correlation value between the test sample and the training sample of the Staple tracking algorithm. A threshold is set on the correlation value; a value below the threshold indicates occlusion. If occlusion occurs, the predicted value of the target is copied to the measured value, and the measured value is corrected to obtain an estimate of the target position.
Further, the onboard computer deploys the Ubuntu ROS operating system, which contains a camera node, a target detection node, a target tracking node, and a flight control node; the camera node collects image data, the target detection node locates all vehicles, the target tracking node tracks the target vehicle, and the flight control node controls the flight of the rotary-wing UAV.
The detection and tracking method of the rotary-wing UAV system for vehicle detection and tracking according to the present invention is characterized in that the method comprises:
The UAV platform computes, detects, and tracks the target in real time; the ground station platform sends flight-control commands to the UAV through wireless communication to control the flight of the aircraft.
Further, the method specifically comprises the following steps (a minimal ROS sketch of this pipeline is given after the list):
(1) The visible-light camera collects image data, and the camera node of the onboard computer publishes an image topic;
(2) The target detection node subscribes to the image topic and uses it as input; the onboard computer computes the vehicle coordinate information with the target detection algorithm and publishes a vehicle-coordinate topic;
(3) The target tracking node subscribes to the vehicle-coordinate topic; the onboard computer predicts the position of the target vehicle with the target tracking algorithm and publishes a target-position topic;
(4) The flight control node subscribes to the target-position topic, performs coordinate conversion, calculates the distance between the target and the UAV, and sends flight-control commands to the flight control module accordingly;
(5) The flight control module executes the commands to control the motion of the drone.
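The following is a minimal rospy sketch of one link in this publish/subscribe chain (illustrative only; the topic names, message types, and the detect_vehicles helper are assumptions, not taken from the patent):

```python
#!/usr/bin/env python
# Minimal ROS 1 (Kinetic-era) sketch of the camera -> detection topic chain.
# Topic names, message types, and detect_vehicles() are illustrative assumptions.
import rospy
from sensor_msgs.msg import Image
from geometry_msgs.msg import PointStamped

def detect_vehicles(img_msg):
    """Placeholder: plug the onboard detector (e.g. YOLO Nano inference) in
    here; it should return a list of (u, v) pixel coordinates of vehicles."""
    return []

class DetectionNode:
    def __init__(self):
        self.pub = rospy.Publisher('/vehicle_coords', PointStamped, queue_size=1)
        rospy.Subscriber('/camera/image_raw', Image, self.on_image, queue_size=1)

    def on_image(self, msg):
        for (u, v) in detect_vehicles(msg):   # pixel coordinates of vehicles
            out = PointStamped()
            out.header = msg.header
            out.point.x, out.point.y = u, v
            self.pub.publish(out)             # vehicle-coordinate topic

if __name__ == '__main__':
    rospy.init_node('target_detection_node')
    DetectionNode()
    rospy.spin()
```

The tracking and flight-control nodes would follow the same pattern, each subscribing to the previous node's topic.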
Further, the target detection algorithm adopts the YOLO Nano algorithm. This compact network architecture greatly reduces the model size while maintaining detection accuracy, so that the end-side computation time meets the requirements and matches the computing power of the onboard computer.
Further, the target tracking algorithm specifically comprises the following steps:
(1) Initialize the Kalman filter and the Staple tracking algorithm;
(2) Perform target tracking in the image sequence;
(3) During tracking, first predict the position of the target vehicle in frame k from the target state in frame k-1; then sample an image block at the predicted position and feed it to the Staple tracking algorithm to obtain a measurement of the target vehicle's position in the image; next, judge whether the target is occluded from the correlation value between the test sample and the training sample of the Staple tracking algorithm, with a threshold set on the correlation value: if the value is below the threshold, occlusion is declared and the predicted value of the target vehicle is copied to the measured value;
(4) Correct the target measurement to obtain an estimate of the target vehicle's position;
(5) Update the target state of the previous frame.
When the complexity of the environment causes the UAV to lose the target during tracking, the Kalman-filter-based re-detection algorithm can quickly search for the target again after it is lost, guaranteeing long-term, stable tracking of the vehicle target in complex environments.
Beneficial effects: deploying the algorithms on the UAV side greatly reduces the system delay and guarantees real-time detection and tracking of motor vehicle targets. Because of the limited computing power of the UAV-side computing device, the YOLO Nano algorithm is adopted, which greatly reduces the model size while maintaining detection accuracy, so that the end-side computation time meets the requirements. The Staple tracking algorithm with an embedded re-detection module enables the tracker to quickly search for the target again after it is lost, ensuring long-term tracking.
Description of the Drawings
Figure 1 is a block diagram of the node design of the onboard-computer ROS system of the present invention;
Figure 2 is a block diagram of the system hardware structure of the present invention;
Figure 3 is a block diagram of the system software structure of the present invention;
Figure 4 is a flow chart of the improved tracking algorithm of the present invention.
Detailed Description of the Embodiments
The technical solution of the present invention is further described below in conjunction with the drawings and specific embodiments.
The present invention proposes a rotary-wing UAV system for vehicle detection and tracking aimed at a practical application scenario: the automatic detection and tracking of motor vehicles. The rotary-wing drone obtains a real-time video stream of the ground through the mounted visible-light camera and transmits it to the ground workstation in real time; the onboard target detection algorithm detects the vehicle targets in the image. After the ground station operator manually selects the vehicle target to be tracked in the video, the onboard target tracking algorithm starts, and the rotary-wing UAV automatically flies to track the selected ground target. The onboard computing device, an NVIDIA Jetson TX2, runs Ubuntu with ROS, on which the camera node, target detection node, target tracking node, and flight control node are deployed and integrated.
Figure 1 is a block diagram of the node design of the onboard-computer ROS system of the present invention. The workflow of the UAV platform comprises the following steps:
(1) The visible-light camera collects image data, and the camera node of the onboard computer publishes an image topic;
(2) The target detection node subscribes to the image topic and uses it as input; the onboard computer computes the target coordinate information with the target detection algorithm and publishes a target-coordinate topic;
(3) The target tracking node subscribes to the coordinate topic; the onboard computer predicts the target position with the target tracking algorithm and publishes a target-position topic;
(4) The flight control node subscribes to the target-position topic, performs coordinate conversion, calculates the distance between the target and the aircraft, and sends flight-control commands to the flight control module accordingly;
(5) The flight control module executes the commands to control the motion of the drone.
Figure 2 is a block diagram of the system hardware structure of the present invention. The invention is applied to the detection and tracking of ground motor vehicles by a rotary-wing drone. The drone payload includes a visible-light camera, a TX2 onboard computer, and a wireless image-transmission unit. The camera is a gimbal camera with self-stabilization that shoots 1080P video at a capture rate of 30 FPS; it is fixed under the drone and films the ground at a fixed angle. The algorithm processing unit is the TX2 onboard computer, running the Ubuntu 16.04 operating system with ROS Kinetic.
Figure 3 is a block diagram of the system software structure of the present invention. Compared with a ground station performing the inference, the computing power of the TX2 onboard computer is much lower, so the deep-learning-based target detection algorithm must be adjusted to match it. The traditional YOLOv3 detection model is 240 MB and too computationally complex, so it is no longer suitable for edge devices; the original network needs to be pruned. YOLO Nano is only about 4.0 MB, 15.1 times and 8.3 times smaller than Tiny YOLOv2 and Tiny YOLOv3, respectively; it requires 4.57B inference operations, 34% and 17% fewer than those two networks; and it achieves 69.1% mAP on the VOC2007 data set, 12 and 10.7 points higher than the two, respectively. Therefore, deploying the YOLO Nano algorithm on the TX2 significantly reduces the computational pressure while guaranteeing detection accuracy.
(1) Target detection algorithm design
YOLO Nano is created using the design principles of the YOLO family of single-shot object detection architectures. It is a highly compact network with module-level macro-architecture and micro-architecture designs highly customized for this application. The network structure mainly comprises the residual PEP macro-architecture and the fully connected attention (FCA) macro-architecture.
A PEP module consists of: a 1*1 convolutional projection layer that maps the input feature map to a lower-dimensional tensor, where num in PEP(num) is that lower dimension; a 1*1 convolutional expansion layer that expands the channels of the feature map back to a higher dimension; a depth-wise convolution layer that applies a different filter to each expanded output channel to perform spatial convolution; and a 1*1 convolutional projection layer that maps the output channels of the previous layer back to a lower dimension. The first two steps fuse features across channels; the expansion in the second step lets more channel features take part in the spatial feature fusion of the third step, improving the abstraction and representational power of the features; the third step is the depth-wise (spatial) convolution; and the fourth step is the point-wise (channel) convolution, which reduces the large amount of computation that later convolutions would otherwise incur by shrinking the channels. The last two steps form a depthwise-separable convolution, which preserves the representational power of the model while reducing computational complexity. Using the residual PEP macro-architecture significantly reduces the architectural and computational complexity while preserving the representational power of the model. The FCA macro-architecture consists of two fully connected layers that learn the dynamic, nonlinear interdependencies between channels and re-weight the importance of the channels through channel-wise multiplication. FCA helps the network focus on the more informative features based on global information, because it recalibrates the dynamic features; this uses the capacity of the neural network more effectively, expressing as much important information as possible with a limited number of parameters. This module therefore achieves a better trade-off between pruning the model architecture, reducing model complexity, and increasing model expressiveness.
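A minimal PyTorch sketch of the two modules as described above (channel sizes, the residual condition, and the exact layer ordering are assumptions inferred from the text, not the official YOLO Nano definition):

```python
import torch
import torch.nn as nn

class PEP(nn.Module):
    """Residual projection-expansion-projection block, per the description:
    1x1 project down -> 1x1 expand -> depth-wise 3x3 -> 1x1 project back."""
    def __init__(self, channels, proj_dim, expand_dim):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, proj_dim, 1),              # project down
            nn.Conv2d(proj_dim, expand_dim, 1),            # expand channels
            nn.Conv2d(expand_dim, expand_dim, 3, padding=1,
                      groups=expand_dim),                  # depth-wise conv
            nn.Conv2d(expand_dim, channels, 1),            # project back
        )

    def forward(self, x):
        return x + self.block(x)                           # residual connection

class FCA(nn.Module):
    """Fully connected attention: two FC layers learn channel interdependencies
    and re-weight the channels by channel-wise multiplication (SE-style)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))    # global average pool per channel
        return x * w.view(n, c, 1, 1)      # channel-wise re-weighting
```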
YOLO Nano is only about 4.0 MB, 15.1 times and 8.3 times smaller than Tiny YOLOv2 and Tiny YOLOv3, respectively. It requires 4.57B inference operations, 34% and 17% fewer than those two networks, and it achieves 69.1% mAP on the VOC2007 data set, 12 and 10.7 points higher than the two, respectively. Therefore, deploying the YOLO Nano algorithm on the TX2 significantly reduces the computational pressure while guaranteeing detection accuracy.
(2) Target tracking algorithm design
The Staple algorithm exploits the fact that correlation filtering on HOG features is robust to motion blur and illumination but not to deformation. If the target deforms, the color distribution of the whole target remains essentially unchanged, so a color histogram is very robust to deformation; on the other hand, a color histogram is not robust to illumination changes, which is exactly where HOG features are complementary. Staple therefore splits into two channels and uses both features at the same time: a correlation filter template is learned from HOG features and updated with the given formula, and a second filter template is learned from color features and updated with its own update formula. The two templates each predict the target position, and their responses are combined by a weighted average into a composite response map; the position of the maximum in the response map is the target location. Although the Staple tracking algorithm overcomes some of the shortcomings of KCF, there is still no reliable solution for target occlusion and loss, so a re-detection module is added to the Staple framework to address this problem. Specifically, the position of the target in the next frame is predicted, and then image samples are taken at the predicted position to further lock onto the target. A Kalman filter can establish a linear motion model of the target and optimally estimate the target state from the model's inputs and outputs; therefore a Kalman filter is used to build the target motion model and predict the target position at the next time step, with camera shake treated as Gaussian noise.
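The template fusion reduces to a per-pixel weighted average of the two response maps; a small numpy sketch (the merge weight of 0.7 is an illustrative assumption, not a value from the patent):

```python
import numpy as np

def fuse_responses(resp_hog, resp_color, w=0.7):
    """Combine the HOG correlation-filter response and the color-histogram
    response into a composite map; the argmax is the predicted target
    position. The weight w is an illustrative assumption."""
    composite = w * resp_hog + (1.0 - w) * resp_color
    return np.unravel_index(np.argmax(composite), composite.shape)
```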
(3) Improvement based on the Kalman filter
The motion model and observation equation of the target can be expressed in the following form. Since the sampling interval between two frames is very short, the motion of the target between two frames is simplified to uniform motion. Here $x$ and $y$ denote the components, on the $u$-axis and $v$-axis, of the pixel distance between the target position and the image center, and $\dot{x}$ and $\dot{y}$ denote the components of the target's velocity on the $u$-axis and $v$-axis, respectively. Because the acceleration of the target is random, $\ddot{x}$ and $\ddot{y}$ can be treated as Gaussian noise. $\Delta t$ is the time interval between adjacent moments.

With state vector $\mathbf{x}_k = [x_k, y_k, \dot{x}_k, \dot{y}_k]^{\mathrm{T}}$ and acceleration noise $\mathbf{w}_k = [\ddot{x}_k, \ddot{y}_k]^{\mathrm{T}}$, the motion model is

$$\mathbf{x}_k = A\,\mathbf{x}_{k-1} + G\,\mathbf{w}_{k-1}.$$

The observation value at time $k$ is

$$\mathbf{z}_k = H\,\mathbf{x}_k + \mathbf{v}_k,$$

where $\mathbf{v}_k$ is the measurement error at time $k$. The state transition matrix $A$, control matrix $G$, and measurement matrix $H$ of the system are, respectively:

$$A = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},\qquad G = \begin{bmatrix} \Delta t^{2}/2 & 0 \\ 0 & \Delta t^{2}/2 \\ \Delta t & 0 \\ 0 & \Delta t \end{bmatrix},\qquad H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}.$$
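A minimal numpy sketch of the predict/correct cycle with exactly these matrices (the noise variances `accel_var` and `meas_var` are illustrative assumptions):

```python
import numpy as np

def make_cv_kalman(dt):
    """Constant-velocity Kalman matrices A, G, H from the model above."""
    A = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    G = np.array([[dt**2 / 2, 0],
                  [0, dt**2 / 2],
                  [dt, 0],
                  [0, dt]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    return A, G, H

def predict(x, P, A, G, accel_var=1.0):
    """Predict the next state and covariance under random acceleration."""
    Q = G @ G.T * accel_var              # process noise from acceleration
    return A @ x, A @ P @ A.T + Q

def correct(x_pred, P_pred, z, H, meas_var=1.0):
    """Correct the prediction with measurement z (camera shake as noise)."""
    R = np.eye(2) * meas_var             # measurement noise covariance
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(4) - K @ H) @ P_pred
    return x, P
```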
Figure 4 is a flow chart of the improved tracking algorithm of the present invention. First, the Kalman filter and the Staple tracking algorithm are initialized, and then target tracking is performed over the image sequence. During tracking, the position of the target in frame k is first predicted from the target state in frame k-1; an image block is then sampled at the predicted position and fed to the Staple tracking algorithm to obtain a measurement of the target position in the image. Next, whether the target is occluded is judged from the correlation value between the test sample and the training sample of the Staple tracking algorithm: a threshold is set on the correlation value, and if the value is below the threshold, occlusion is declared and the predicted value of the target is copied to the measured value. The target measurement is then corrected to obtain the final estimate of the target position. In this process, after the target position of each frame is corrected, the target state of the previous frame is updated. A sketch of this loop follows.
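Putting the pieces together, one per-frame step of the improved tracker might look like this (a sketch only: `staple_track()` and the occlusion threshold are illustrative assumptions, and `predict`/`correct` are the helpers sketched above):

```python
# One tracking step of the improved algorithm (Fig. 4), assuming the
# predict()/correct() helpers sketched above and an illustrative
# staple_track(frame, pos) -> (measured_pos, correlation) function.
THRESHOLD = 0.3   # occlusion threshold on the Staple correlation (assumed)

def track_step(frame, x, P, A, G, H, staple_track):
    x_pred, P_pred = predict(x, P, A, G)            # 1. predict position in frame k
    pos_pred = x_pred[:2]
    measured, corr = staple_track(frame, pos_pred)  # 2. measure at predicted position
    if corr < THRESHOLD:                            # 3. occlusion check
        measured = pos_pred                         #    fall back to the prediction
    x, P = correct(x_pred, P_pred, measured, H)     # 4. correct the measurement
    return x, P                                     # 5. updated target state
```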

Claims (10)

  1. A rotary-wing unmanned aerial vehicle system for vehicle detection and tracking, the system comprising a UAV platform and a ground station platform, characterized in that the UAV platform is used to compute, detect, and track the target in real time, and the ground station platform is used to perform tracking video monitoring of the target and to issue manual flight-control commands to the UAV platform.
  2. 根据权利要求1所述的用于车辆检测跟踪的旋翼无人机系统,其特征在于,所述无人机平台包括:可见光摄像机、机载计算机、第一无线图传终端和飞控模块,其中,所述机载计算机分别与可见光摄像机、第一无线图传终端、飞控模块相连接;所述地面站平台包括PC机和第二无线图传终端,两者信息交互;第一无线图传终端与第二无线图传终端信息交互;The rotary wing unmanned aerial vehicle system for vehicle detection and tracking according to claim 1, wherein the unmanned aerial vehicle platform comprises: a visible light camera, an onboard computer, a first wireless image transmission terminal and a flight control module, wherein , The onboard computer is respectively connected with a visible light camera, a first wireless image transmission terminal, and a flight control module; the ground station platform includes a PC and a second wireless image transmission terminal, which exchange information; the first wireless image transmission Information interaction between the terminal and the second wireless image transmission terminal;
    the visible-light camera is used to collect image data;
    the onboard computer is used to run the target detection algorithm and the target tracking algorithm;
    the first wireless image-transmission terminal is used to transmit the real-time target-tracking video stream and to receive manual flight-control instructions from the ground station;
    the PC is used to monitor the real-time target-tracking video stream and to issue manual flight-control instructions;
    the second wireless image-transmission terminal is used to receive the real-time target-tracking video stream and to send manual flight-control instructions.
  3. The rotary-wing unmanned aerial vehicle system for vehicle detection and tracking according to claim 1, characterized in that the target detection algorithm run by the onboard computer adopts the YOLO Nano algorithm.
  4. The rotary-wing unmanned aerial vehicle system for vehicle detection and tracking according to claim 1, characterized in that the target tracking algorithm run by the onboard computer adopts the Staple tracking algorithm.
  5. The rotary-wing unmanned aerial vehicle system for vehicle detection and tracking according to claim 1, characterized in that the onboard computer further comprises a target re-detection module, which judges whether the target is occluded according to the correlation value between the test sample and the training samples of the Staple tracking algorithm: a threshold is set on the correlation value, and the target is judged to be occluded when the value falls below the threshold; if the target is occluded, the predicted value of the target is copied to the measured value, and the measured value is corrected to obtain an estimate of the target position.
  6. The rotary-wing unmanned aerial vehicle system for vehicle detection and tracking according to claim 1, characterized in that the onboard computer is used to deploy the Ubuntu ROS operating system, which comprises a camera node, a target detection node, a target tracking node, and a flight-control node, wherein the camera node is used to collect image data, the target detection node is used to locate all vehicles, the target tracking node is used to track the target vehicle, and the flight-control node is used for flight control of the rotary-wing unmanned aerial vehicle.
  7. A detection and tracking method based on the rotary-wing unmanned aerial vehicle system for vehicle detection and tracking according to claim 1, characterized in that the method comprises:
    the unmanned aerial vehicle platform computes in real time and detects and tracks the target; the ground station platform sends flight-control instructions to the unmanned aerial vehicle through wireless communication to control its flight.
  8. The detection and tracking method of the rotary-wing unmanned aerial vehicle system for vehicle detection and tracking according to claim 7, characterized in that the unmanned aerial vehicle platform comprises a visible-light camera, an onboard computer, a first wireless image-transmission terminal, and a flight-control module, wherein the onboard computer is connected to the visible-light camera, the first wireless image-transmission terminal, and the flight-control module respectively; the Ubuntu ROS operating system is deployed on the onboard computer and comprises a camera node for collecting image data, a target detection node for locating all vehicles, a target tracking node for tracking the target vehicle, and a flight-control node for controlling the flight of the rotary-wing unmanned aerial vehicle; the ground station platform comprises a PC and a second wireless image-transmission terminal, which exchange information with each other; the first wireless image-transmission terminal exchanges information with the second wireless image-transmission terminal; and the method comprises the following steps:
    (1) the visible-light camera collects image data, and the camera node of the onboard computer publishes an image topic;
    (2) the target detection node subscribes to the image topic as its input; the onboard computer computes vehicle coordinate information according to the target detection algorithm and publishes a vehicle-coordinate topic;
    (3) the target tracking node subscribes to the vehicle-coordinate topic; the onboard computer predicts the position of the target vehicle according to the target tracking algorithm and publishes a target-position topic;
    (4) the flight-control node subscribes to the target-position topic, performs coordinate conversion, calculates the distance between the target and the unmanned aerial vehicle, and sends flight-control instructions to the flight-control module accordingly;
    (5) the flight-control module executes the instructions to control the motion of the unmanned aerial vehicle.
  9. The detection and tracking method of the rotary-wing unmanned aerial vehicle system for vehicle detection and tracking according to claim 8, characterized in that the target detection algorithm adopts the YOLO Nano algorithm.
  10. The detection and tracking method of the rotary-wing unmanned aerial vehicle system for vehicle detection and tracking according to claim 8, characterized in that the target tracking algorithm specifically comprises the following steps:
    (1) initialize the Kalman filter and the Staple tracking algorithm;
    (2) perform target tracking over the image sequence;
    (3) during tracking, first predict the position of the target vehicle in frame k from the target state in frame k-1; then sample an image patch at the predicted position and input it to the Staple tracking algorithm to obtain a measurement of the target vehicle's position in the image; next, judge whether the target is occluded according to the correlation value between the test sample and the training samples of the Staple tracking algorithm, a threshold being set on the correlation value such that the target is judged to be occluded when the value falls below the threshold; if the target is occluded, copy the predicted value of the target vehicle to the measured value;
    (4) correct the target measurement to obtain an estimate of the position of the target vehicle;
    (5) update the target state of the previous frame.
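As an illustration of the publish/subscribe chain in steps (1) to (4) of claim 8, the sketch below shows what the target tracking node of step (3) might look like as a ROS node in Python; the topic names, the message type, and the pass-through prediction are illustrative assumptions, since the claims do not specify node interfaces.

#!/usr/bin/env python
# Sketch of the target-tracking node from claim 8, step (3): it subscribes to
# the vehicle-coordinate topic published by the detection node and publishes
# the predicted target position for the flight-control node to consume.
import rospy
from geometry_msgs.msg import PointStamped  # assumed message type

def on_vehicle_coords(msg):
    # Placeholder prediction: a real node would run the Kalman/Staple tracker
    # described above instead of passing the detection through unchanged.
    target = PointStamped()
    target.header.stamp = rospy.Time.now()
    target.point = msg.point
    target_pub.publish(target)

if __name__ == "__main__":
    rospy.init_node("target_tracking_node")
    # Topic names are illustrative, not taken from the patent.
    target_pub = rospy.Publisher("/target_position", PointStamped, queue_size=1)
    rospy.Subscriber("/vehicle_coordinates", PointStamped, on_vehicle_coords)
    rospy.spin()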
PCT/CN2020/082257 2020-03-24 2020-03-31 Rotor unmanned aerial vehicle system for vehicle detection and tracking, and detection and tracking method WO2021189507A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010212643.8A CN111476116A (en) 2020-03-24 2020-03-24 Rotor unmanned aerial vehicle system for vehicle detection and tracking and detection and tracking method
CN202010212643.8 2020-03-24

Publications (1)

Publication Number Publication Date
WO2021189507A1 (en)

Family

ID=71748379

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/082257 WO2021189507A1 (en) 2020-03-24 2020-03-31 Rotor unmanned aerial vehicle system for vehicle detection and tracking, and detection and tracking method

Country Status (2)

Country Link
CN (1) CN111476116A (en)
WO (1) WO2021189507A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950671B (en) * 2020-08-06 2024-02-13 中国人民解放军32146部队 Real-time high-precision parameter measurement method for moving target by unmanned aerial vehicle
CN111932588B (en) * 2020-08-07 2024-01-30 浙江大学 A tracking method for airborne UAV multi-target tracking system based on deep learning
CN112163628A (en) * 2020-10-10 2021-01-01 北京航空航天大学 Method for improving target real-time identification network structure suitable for embedded equipment
CN112734800A (en) * 2020-12-18 2021-04-30 上海交通大学 Multi-target tracking system and method based on joint detection and characterization extraction
CN112770272B (en) * 2021-01-11 2022-02-25 四川泓宝润业工程技术有限公司 Unmanned aerial vehicle and multi-platform data transmission device
CN112907634B (en) * 2021-03-18 2023-06-20 沈阳理工大学 UAV-based vehicle tracking method
CN113808161B (en) * 2021-08-06 2024-03-15 航天时代飞鹏有限公司 Vehicle-mounted multi-rotor unmanned aerial vehicle tracking method based on machine vision
CN113949826B (en) * 2021-09-28 2024-11-05 航天时代飞鸿技术有限公司 A method and system for cooperative reconnaissance of drone clusters under limited communication bandwidth conditions
CN114815866A (en) * 2022-04-14 2022-07-29 哈尔滨工业大学人工智能研究院有限公司 Switching control method for temporary loss target unmanned aerial vehicle with stability guarantee
CN115514787B (en) * 2022-09-16 2023-06-27 北京邮电大学 Intelligent unmanned aerial vehicle assisted decision-making planning method and device for Internet of Vehicles environment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102355574B (en) * 2011-10-17 2013-12-25 上海大学 Image stabilizing method of airborne tripod head moving target autonomous tracking system
CN106289186B (en) * 2016-09-21 2019-04-19 南京航空航天大学 Rotor UAV airborne visual detection and multi-target positioning system and implementation method
CN106981073B (en) * 2017-03-31 2019-08-06 中南大学 A method and system for real-time tracking of ground moving targets based on UAV
CN107128492B (en) * 2017-05-05 2019-09-20 成都通甲优博科技有限责任公司 A kind of unmanned plane tracking, device and unmanned plane based on number of people detection
CN109002059A (en) * 2017-06-06 2018-12-14 武汉小狮科技有限公司 A kind of multi-rotor unmanned aerial vehicle object real-time tracking camera system and method
CN109445453A (en) * 2018-09-12 2019-03-08 湖南农业大学 A kind of unmanned plane Real Time Compression tracking based on OpenCV
CN109785363A (en) * 2018-12-29 2019-05-21 中国电子科技集团公司第五十二研究所 A kind of unmanned plane video motion Small object real-time detection and tracking
CN109816698B (en) * 2019-02-25 2023-03-24 南京航空航天大学 Unmanned aerial vehicle visual target tracking method based on scale self-adaptive kernel correlation filtering
CN110058610A (en) * 2019-05-07 2019-07-26 南京信息工程大学 A kind of auxiliary of real-time inspection flock of sheep number is put sheep out to pasture method and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190212316A1 (en) * 2015-01-23 2019-07-11 Airscout Inc. Methods and systems for analyzing a field
CN110222581A (en) * 2019-05-13 2019-09-10 电子科技大学 A kind of quadrotor drone visual target tracking method based on binocular camera
CN110610512A (en) * 2019-09-09 2019-12-24 西安交通大学 UAV target tracking method based on BP neural network fusion Kalman filter algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ALEXANDER WONG; MAHMOUD FAMUORI; MOHAMMAD JAVAD SHAFIEE; FRANCIS LI; BRENDAN CHWYL; JONATHAN CHUNG: "YOLO Nano: a Highly Compact You Only Look Once Convolutional Neural Network for Object Detection", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 3 October 2019 (2019-10-03), 201 Olin Library Cornell University Ithaca, NY 14853, XP081501523 *
ZHAO CHANG: "Research on Target Tracking Technology Based on Multi-Rotor UAV", CHINESE MASTER'S THESES FULL-TEXT DATABASE, TIANJIN POLYTECHNIC UNIVERSITY, CN, 15 February 2019 (2019-02-15), CN, XP055852868, ISSN: 1674-0246 *

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114049573A (en) * 2021-11-09 2022-02-15 上海建工四建集团有限公司 Method for supervising safety construction of village under construction residence
CN114155511A (en) * 2021-12-13 2022-03-08 吉林大学 Environmental information acquisition method for automatically driving automobile on public road
CN114545965A (en) * 2021-12-31 2022-05-27 中国人民解放军国防科技大学 A UAV polder piping inspection system and method based on deep learning
CN114545965B (en) * 2021-12-31 2024-09-06 中国人民解放军国防科技大学 Unmanned plane levee piping inspection system and method based on deep learning
CN115268506A (en) * 2022-01-18 2022-11-01 中国人民解放军海军工程大学 Unmanned aircraft photoelectric cooperative tracking control method, system, terminal and medium
CN114612825A (en) * 2022-03-09 2022-06-10 云南大学 Target detection method based on edge equipment
CN114612825B (en) * 2022-03-09 2024-03-19 云南大学 Target detection method based on edge equipment
CN114900654A (en) * 2022-04-02 2022-08-12 北京斯年智驾科技有限公司 Real-time monitoring video transmission system for autonomous vehicles
CN114900654B (en) * 2022-04-02 2024-01-30 北京斯年智驾科技有限公司 Real-time monitoring video transmission system for automatic driving vehicle
CN114882450A (en) * 2022-04-13 2022-08-09 南京大学 Method for detecting reversing behavior of high-speed ramp junction under unilateral cruising of unmanned aerial vehicle
CN114859967A (en) * 2022-04-24 2022-08-05 北京同创信通科技有限公司 Intelligent scrap steel quality testing system and method for automatically controlling unmanned aerial vehicle
CN114897935A (en) * 2022-05-13 2022-08-12 中国科学技术大学 Unmanned aerial vehicle tracking method and system for air target object based on virtual camera
CN114973033A (en) * 2022-05-30 2022-08-30 青岛科技大学 Unmanned aerial vehicle automatic target detection and tracking method
CN114973033B (en) * 2022-05-30 2024-03-01 青岛科技大学 Unmanned aerial vehicle automatic detection target and tracking method
CN115077549B (en) * 2022-06-16 2024-04-26 南昌智能新能源汽车研究院 Vehicle state tracking method, system, computer and readable storage medium
CN115077549A (en) * 2022-06-16 2022-09-20 南昌智能新能源汽车研究院 Vehicle state tracking method, system, computer and readable storage medium
CN114879744B (en) * 2022-07-01 2022-10-04 浙江大学湖州研究院 Night work unmanned aerial vehicle system based on machine vision
CN114879744A (en) * 2022-07-01 2022-08-09 浙江大学湖州研究院 Night work unmanned aerial vehicle system based on machine vision
CN115712354B (en) * 2022-07-06 2023-05-30 成都戎盛科技有限公司 Man-machine interaction system based on vision and algorithm
CN115712354A (en) * 2022-07-06 2023-02-24 陈伟 Man-machine interaction system based on vision and algorithm
CN115865939A (en) * 2022-11-08 2023-03-28 燕山大学 Edge cloud collaborative decision-making-based target detection and tracking system and method
CN115865939B (en) * 2022-11-08 2024-05-10 燕山大学 Target detection and tracking system and method based on edge cloud collaborative decision
CN116068928A (en) * 2022-11-23 2023-05-05 北京航天自动控制研究所 Distributed heterogeneous unmanned aerial vehicle cluster air-ground integrated control system and method
CN115908475A (en) * 2023-03-09 2023-04-04 四川腾盾科技有限公司 Method and system for realizing image pre-tracking function of airborne photoelectric reconnaissance pod
CN115908475B (en) * 2023-03-09 2023-05-19 四川腾盾科技有限公司 Implementation method and system for airborne photoelectric reconnaissance pod image pre-tracking function
CN116805195A (en) * 2023-05-25 2023-09-26 南京航空航天大学 A collaborative reasoning method and system for UAV swarms based on model segmentation
CN116778360A (en) * 2023-06-09 2023-09-19 北京科技大学 Ground target positioning method and device for flapping-wing flying robot
CN116778360B (en) * 2023-06-09 2024-03-19 北京科技大学 Ground target positioning method and device for flapping-wing flying robot
CN116703975B (en) * 2023-06-13 2023-12-15 武汉天进科技有限公司 Intelligent target image tracking method for unmanned aerial vehicle
CN116703975A (en) * 2023-06-13 2023-09-05 武汉天进科技有限公司 Intelligent target image tracking method for unmanned aerial vehicle
CN116493735A (en) * 2023-06-29 2023-07-28 武汉纺织大学 A real-time tracking method for moving spatter during 10,000-watt ultra-high power laser welding
CN116493735B (en) * 2023-06-29 2023-09-12 武汉纺织大学 Real-time tracking method for motion splash in Wanwave-level ultra-high power laser welding process
CN117132914A (en) * 2023-10-27 2023-11-28 武汉大学 Method and system for identifying large model of universal power equipment
CN117132914B (en) * 2023-10-27 2024-01-30 武汉大学 General power equipment identification large model method and system
CN118584953A (en) * 2024-05-20 2024-09-03 南京农业大学 Harvester-grain transport vehicle dual-mode switching collaborative grain unloading system and method based on harvester unloading port identification and tracking
CN119225426A (en) * 2024-11-28 2024-12-31 北京航空航天大学 Anti-unmanned aerial vehicle system and method based on air-to-air visual recognition

Also Published As

Publication number Publication date
CN111476116A (en) 2020-07-31

Similar Documents

Publication Publication Date Title
WO2021189507A1 (en) Rotor unmanned aerial vehicle system for vehicle detection and tracking, and detection and tracking method
Rohan et al. Convolutional neural network-based real-time object detection and tracking for parrot AR drone 2
Lee et al. Real-time, cloud-based object detection for unmanned aerial vehicles
CN102538782B (en) Helicopter landing guide device and method based on computer vision
CN109242003B (en) Vehicle-mounted vision system self-motion determination method based on deep convolutional neural network
CN103778645B (en) Circular target real-time tracking method based on images
CN110580713A (en) Satellite Video Target Tracking Method Based on Fully Convolutional Siamese Network and Trajectory Prediction
CN107943064A (en) A kind of unmanned plane spot hover system and method
CN102456226B (en) Tracking methods for regions of interest
CN111307291B (en) Method, device and system for detecting and locating abnormal surface temperature based on UAV
CN104200494A (en) Real-time visual target tracking method based on light streams
CN108829136A (en) The a wide range of synergic monitoring method and apparatus of unmanned aerial vehicle group
CN108803655A (en) A kind of UAV Flight Control platform and method for tracking target
CN105334347A (en) Particle image velocimetry system and method based on unmanned plane
Valenti et al. An autonomous flyer photographer
CN104820435A (en) Quadrotor moving target tracking system based on smart phone and method thereof
CN114529585A (en) Mobile equipment autonomous positioning method based on depth vision and inertial measurement
Zhu et al. PairCon-SLAM: Distributed, online, and real-time RGBD-SLAM in large scenarios
CN107145167A (en) A kind of video target tracking method based on digital image processing techniques
CN116824080A (en) Method for realizing SLAM point cloud mapping of power transmission corridor based on multi-sensor fusion
CN113392723A (en) Unmanned aerial vehicle forced landing area screening method, device and equipment based on artificial intelligence
Qin et al. Visual-based tracking and control algorithm design for quadcopter UAV
WO2022198508A1 (en) Lens abnormality prompt method and apparatus, movable platform, and readable storage medium
CN113111721B (en) Human behavior intelligent identification method based on multi-unmanned aerial vehicle visual angle image data driving
CN116243725A (en) Substation drone inspection method and system based on visual navigation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20927289

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20927289

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.05.2023)


Kind code of ref document: A1