
CN117928570A - Path planning method for dynamic target identification and tracking of miniature unmanned vehicle - Google Patents


Info

Publication number
CN117928570A
CN117928570A
Authority
CN
China
Prior art keywords
unmanned vehicle
path planning
tracking
planning method
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311705930.2A
Other languages
Chinese (zh)
Inventor
秦晓驹
马赛
张月明
金晓伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Fuyun Intelligent Technology Co ltd
Original Assignee
Shanghai Fuyun Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Fuyun Intelligent Technology Co ltd filed Critical Shanghai Fuyun Intelligent Technology Co ltd
Priority to CN202311705930.2A
Publication of CN117928570A
Legal status: Pending


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation specially adapted for navigation in a road network
    • G01C21/28 Navigation specially adapted for navigation in a road network, with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G01C21/34 Route searching; Route guidance
    • G01C21/3446 Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a path planning method for dynamic target identification and tracking for a miniature unmanned vehicle, which comprises the following steps: collecting and annotating an operator data set; training a target detection network to obtain a target detection model, collecting environment information, and detecting and identifying the operator; tracking the operator with a deep learning method and outputting the tracking trajectory; building a map and positioning the miniature unmanned vehicle; calculating the navigation target point of the miniature unmanned vehicle; and performing path planning. The beneficial effects of the invention are as follows: a simple and efficient tracking method realizes path planning for dynamic target identification and tracking of the miniature unmanned vehicle and enables its automatic driving; it solves the problem of an unmanned vehicle following a moving target in a dynamic environment, removes the prior-art need for a driver, and thereby reduces cost and improves efficiency; and it can quickly and accurately plan an efficient path in various scenes while coping with dynamic scene changes.

Description

Path planning method for dynamic target identification and tracking of miniature unmanned vehicle
Technical Field
The invention relates to the technical field of automatic following for unmanned vehicles, and in particular to a path planning method for dynamic target identification and tracking of a miniature unmanned vehicle.
Background
In recent years, unmanned vehicles have integrated more and more technologies and have become increasingly diversified. The intelligent accompanying work cart is currently a popular field.
Following methods based on target detection use global information and detect very quickly, which meets the processing requirements of specific applications. Target detection algorithms based on deep learning can be roughly divided into two types according to their pipeline: two-stage (Two-Stage) target detection algorithms and one-stage (One-Stage) target detection algorithms. Two-stage detectors are more accurate but slower; one-stage detectors offer moderate accuracy, fast detection, high efficiency, flexibility and good generalization, and are therefore widely used in industry.
In the field of robotics, metric maps are often used for positioning and for simultaneous localization and mapping (SLAM), topological maps are often used for path planning, and semantic maps are often used for human-machine interaction. Path planning belongs to the control or decision layer of the unmanned vehicle architecture; the performance of the path planning module directly determines the quality of the selected driving path and the smoothness of driving, and depends largely on the planning algorithm itself. Neither the two-stage (Two-Stage) nor the one-stage (One-Stage) target detection algorithm alone can deliver path planning that is both fast and accurate. Therefore, how to quickly and accurately plan an efficient path in various scenes, while providing the ability to cope with dynamic scene changes, is the problem a path planning algorithm should solve.
Disclosure of Invention
The invention aims to provide a path planning method for dynamic target identification and tracking of a miniature unmanned vehicle, so as to solve the problems raised in the background art.
In order to achieve the above purpose, the present invention provides the following technical solution: a path planning method for dynamic target identification and tracking of a miniature unmanned vehicle, comprising the following steps:
Step one, an operator guides the miniature unmanned vehicle to move; during the movement, various posture images of the operator are collected through a camera and annotated to produce a training data set;
Step two, the data in the training data set are input into a target detection network for training to obtain a trained target detection model; a camera collects environment information to obtain a video stream, and the target detection model then detects and identifies the operator;
Step three, the size and position of the operator are predicted from the collected video stream, and the operator is tracked with a deep learning method that outputs the tracking trajectory;
Step four, a real-time environment grid map is built and the position of the miniature unmanned vehicle is marked in the map;
Step five, a navigation target point is marked according to the operator's position in the map from step four, and the navigation target point of the miniature unmanned vehicle is calculated;
Step six, a path planning algorithm plans the path of the miniature unmanned vehicle according to its position in the map and the navigation target point.
Further preferably, the target detection network in step two is a YOLOv target detection network, and the camera is an RGB-D camera.
Further preferably, the YOLOv target detection network comprises an input module, a Backbone module, a Neck module and a Prediction module; the adjustments to the input module include Mosaic data augmentation, which enriches the data set and reduces GPU usage, and an adaptive anchor calculation mode.
Further preferably, the method for target tracking with the deep learning method in step three is as follows:
S1, detection boxes and their detection scores are obtained from the detector; a box whose score is higher than a set high threshold α is placed in the high-confidence group, and a box whose score is lower than α but higher than a set low threshold β is placed in the low-confidence group;
S2, tracks are associated with the high-confidence boxes using the similarity between each detection box and the Kalman filter estimate; matching is then performed with the Hungarian algorithm based on this similarity, and the high-confidence detection boxes left unmatched to any track, as well as the tracks left unmatched to any box, are retained;
S3, the remaining tracks are associated with the low-confidence detection boxes; tracks still unmatched after this second round are retained, low-confidence boxes without a matching track are deleted, and high-confidence boxes without a matching track are saved as new tracks;
S4, detection boxes left unmatched after both rounds of matching are initialized as new tracks.
Further preferably, the high threshold α is 0.8 and the low threshold β is 0.1. IoU (Intersection over Union) is used as the similarity measure in S2; IoU is a criterion for measuring how accurately an object is detected on a given data set, quantifying the overlap between ground truth and prediction: the greater the overlap, the higher the value.
Further preferably, in step four the miniature unmanned vehicle is provided with a single-line lidar, an RGB-D camera, an ultrasonic ranging sensor, a memory and a screen; the single-line lidar, RGB-D camera and ultrasonic ranging sensor sense the surrounding environment, the memory stores a computer program, and the screen displays the sensed environment information.
Further preferably, the environment grid map in step four is built by a laser-inertial SLAM algorithm designed around the single-line lidar and an IMU, which performs real-time positioning and environment grid map construction.
Further preferably, the navigation target point of the miniature unmanned vehicle in step five is calculated by finding, with a shortest path algorithm, the path combination with the shortest travel distance from the initial position of the miniature unmanned vehicle to the navigation target point position P(x_p, y_p), thereby obtaining the desired navigation end point.
Further preferably, the path planning algorithms in step six include intelligent search algorithms, artificial-intelligence-based algorithms, geometric-model-based algorithms and local obstacle avoidance algorithms. Intelligent search algorithms are artificial intelligence search algorithms and the essence of current intelligent computing; search is a common problem-solving technique that explores a given search space while minimizing the cost of the path and the cost of the search process. Artificial-intelligence-based algorithms first convert a problem into a computable form by mathematical methods and then solve it by computer; built on neural network structures, they achieve automatic pattern recognition and decision-making through training on large amounts of data and can handle very complex tasks and problems. The geometric-model-based approach adopts the A* algorithm. Local obstacle avoidance algorithms avoid collisions with obstacles in intelligent systems such as autonomous mobile robots or self-driving vehicles, and include reflective, planning and perception-based obstacle avoidance algorithms.
Further preferably, the geometric-model-based algorithm adopts the A* algorithm, i.e. the A-Star algorithm, a direct search method for finding the shortest path in a static road network, which allows the global path of the miniature unmanned vehicle to be planned quickly.
The beneficial effects are as follows: according to this path planning method for dynamic target identification and tracking for a miniature unmanned vehicle, the YOLOv target detection model detects and identifies the operator, which facilitates subsequent tracking. The operator is tracked with a deep learning method that outputs the tracking trajectory; by using the similarity between detection boxes and tracking trajectories, low-confidence detection results are not simply discarded: the background is removed from them while the high-confidence results are kept, and real objects are mined, which reduces missed detections and improves trajectory continuity, making this a simple and efficient tracking method. The invention builds an environment grid map for path planning and global positioning, realizing positional awareness of the miniature unmanned vehicle. It performs global path planning with the well-performing and accurate A* algorithm and uses the teb open-source framework for local path planning. The invention realizes path planning for dynamic target identification and tracking of the miniature unmanned vehicle and its automatic driving, solves the problem of an unmanned vehicle following a moving target in a dynamic environment, removes the prior-art need for a driver, reduces the consumption of manpower and material resources, effectively cuts cost and improves efficiency, and can quickly and accurately plan an efficient path in various scenes while coping with dynamic scene changes.
Drawings
Fig. 1 is a schematic flow chart of the path planning method for dynamic target identification and tracking of a miniature unmanned vehicle disclosed in an embodiment of the present invention;
Fig. 2 is a diagram of the network structure of the YOLOv target detection algorithm disclosed in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the specific operation flow of target tracking with the deep learning method disclosed in an embodiment of the present invention.
Detailed Description
The following are specific embodiments of the present invention, and the technical solutions of the present invention are further described with reference to the accompanying drawings; however, the present invention is not limited to these embodiments.
As shown in Figs. 1-3, the path planning method for dynamic target identification and tracking for a miniature unmanned vehicle mainly comprises six parts: collecting and annotating the operator data set, detecting and identifying the operator, tracking the detected target, mapping and positioning the miniature unmanned vehicle, calculating the navigation target point, and planning the path of the miniature unmanned vehicle. For data set annotation, the collected images are labeled with the Labelme software. For operator detection and identification, a deep-learning-based YOLOv target detection model is used: the data obtained in the previous step are loaded into a pre-trained YOLOv target detection network for fine-tuning, and target detection is then performed. For tracking the detected target, we propose a simple and efficient tracking method, namely a deep learning method; its biggest difference from existing tracking algorithms is that, instead of simply discarding the low-confidence detection results, it uses the similarity between detection boxes and tracking trajectories to remove the background from the low-confidence results while keeping the high-confidence results, mining real objects, including difficult samples such as occluded or blurred ones, which reduces missed detections and improves trajectory consistency. For mapping and positioning of the miniature unmanned vehicle, a single-line lidar and an RGB-D depth camera are used as the main means of sensing environment information, and positioning of the miniature unmanned vehicle and construction of the environment grid map are then carried out simultaneously. Once the environment grid map and the operator's target position are obtained, the navigation target point of the miniature unmanned vehicle is calculated from the environment, the operating requirements and other information. For path planning of the miniature unmanned vehicle, an environment model is built from sensor data, global path planning is then performed with the A* algorithm, a heuristic search algorithm, and a collision-free path for the miniature unmanned vehicle is finally obtained.
The path planning method for dynamic target identification and tracking of the miniature unmanned vehicle comprises the following steps:
Step one, early-stage data for target detection are prepared: an operator guides the miniature unmanned vehicle to move, various posture images of the operator are collected through a camera during the movement, and the targets are annotated to produce a training data set;
Step two, the data in the training data set are input into the YOLOv target detection network for training, so that the model can specifically recognize the operator's features, yielding a trained YOLOv target detection model; an RGB-D camera collects environment information to obtain a video stream, and the YOLOv target detection model then detects and identifies the operator;
Step three, according to the collected video stream, a single target is framed in the initial frame of the video, then the size and position of the operator are predicted, and the operator is tracked with a deep learning method that outputs the tracking trajectory;
Step four, environment information is sensed through the single-line lidar and RGB-D camera on the miniature unmanned vehicle; a laser-inertial SLAM algorithm designed around the single-line lidar and an IMU performs real-time positioning and environment grid map construction, a real-time environment grid map is built, and the position of the miniature unmanned vehicle is marked in the map;
Step five, the operator's position in the real-time environment grid map of step four is determined from the point cloud information obtained by the RGB-D camera, the operator detection and identification result of step two, and the real-time environment grid map of step four; a navigation target point is marked, and the navigation target point of the miniature unmanned vehicle is then calculated;
Step six, global path planning is performed with an A* algorithm based on the two-dimensional map, and local path optimization is realized with the teb framework; key path planning parameters such as path control points, speed, acceleration and movement direction are determined from the operator's walking trajectory combined with vehicle kinematics, the movement trajectory of the miniature unmanned vehicle is generated through Kalman filtering according to its position in the map and the navigation target point, realizing path planning for the miniature unmanned vehicle, and the result is sent to the motion control module of the miniature unmanned vehicle to complete its behavior control.
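For illustration of the Kalman filtering mentioned in step six, the following is a minimal Python sketch of a constant-velocity Kalman filter that smooths the tracked operator position before path control points are derived. The state layout, time step and all noise parameters are illustrative assumptions, not values disclosed in this application.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter over state [x, y, vx, vy].
    All noise magnitudes below are illustrative guesses, not patent values."""
    def __init__(self, dt=0.1):
        self.x = np.zeros(4)                      # state estimate
        self.P = np.eye(4) * 10.0                 # state covariance
        self.F = np.eye(4)                        # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                 # only position is measured
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.01                 # process noise (assumed)
        self.R = np.eye(2) * 0.5                  # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        # z: measured (x, y) of the operator from the tracker
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```

In practice such a filter would be called once per video frame: predict() before each new measurement, then update() with the operator position estimated by the tracker.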
In the present application, the YOLOv target detection network comprises an input module, a Backbone module, a Neck module and a Prediction module; the adjustments to the input module include Mosaic data augmentation, which enriches the data set and reduces GPU usage, and an adaptive anchor calculation mode. The YOLOv target detection network extracts the operator's features as follows: a CBS structure and a BottleNeck structure are used in the Backbone module; the CBS structure is formed by connecting a Conv layer, a Batch Normalization layer and a SiLU layer in series, where the Conv layers used in CBS comprise two kinds, 1×1 and 3×3 convolution layers, and the data are normalized. The Batch Normalization layer normalizes each neuron; in convolution layers, thanks to the weight-sharing strategy, the whole feature map is treated as one neuron. The SiLU activation ensures the self-stabilizing property of the structure, where SiLU(x) = x·sigmoid(x).
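As an illustration of the CBS structure described above (Conv, Batch Normalization and SiLU in series), here is a minimal PyTorch sketch. The channel counts and the residual form of the BottleNeck are assumptions modeled on common YOLO-style backbones, since the application does not disclose those details.

```python
import torch
import torch.nn as nn

class CBS(nn.Module):
    """Conv + BatchNorm + SiLU in series, as described above.
    kernel_size is restricted to the 1x1 and 3x3 cases mentioned in the text."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        assert kernel_size in (1, 3)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)   # normalizes each channel/feature map
        self.act = nn.SiLU()               # SiLU(x) = x * sigmoid(x)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class BottleNeck(nn.Module):
    """Residual bottleneck built from two CBS blocks (1x1 then 3x3);
    the exact layout in this application is not specified, so this form
    is an assumption."""
    def __init__(self, ch):
        super().__init__()
        self.cbs1 = CBS(ch, ch, kernel_size=1)
        self.cbs2 = CBS(ch, ch, kernel_size=3)

    def forward(self, x):
        return x + self.cbs2(self.cbs1(x))
```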
In the present application, the method for target tracking with the deep learning method is as follows:
S1, detection boxes and their detection scores are obtained from the detector; a box whose score is higher than the set high threshold α (0.8) is placed in the high-confidence group, and a box whose score is lower than α but higher than the set low threshold β (0.1) is placed in the low-confidence group;
S2, tracks are associated with the high-confidence boxes using the similarity between each detection box and the Kalman filter estimate, with IoU as the similarity measure; matching is then performed with the Hungarian algorithm based on this similarity, and the high-confidence detection boxes left unmatched to any track, as well as the tracks left unmatched to any box, are retained;
S3, the remaining tracks are associated with the low-confidence detection boxes in a second round; tracks still unmatched after this second round are retained, and low-confidence boxes for which no corresponding track is found are deleted, these generally being background containing no object. IoU alone is used as the similarity in this second association: since low-score detection boxes usually contain severe occlusion or motion blur, their appearance features are unreliable, so appearance similarity is not used here. The high-confidence bounding boxes not matched to any track are saved as newly appearing tracks;
S4, detection boxes left unmatched after both rounds of matching are initialized as new tracks.
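Both association rounds rely on IoU similarity between the Kalman-predicted track boxes and the detection boxes. As a small illustration, the following Python function computes IoU; the (x1, y1, x2, y2) box format is an assumption.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```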
The specific operation flow of target tracking by the deep learning method is as follows:
1) Obtain the detection results and construct strack objects;
2) Divide strack into groups according to the set thresholds;
3) Merge the current tracks and the lost tracks into a track-list, and perform association matching with Kalman filtering;
4) Compute the IoU similarity between the track-list and the strack high-confidence group;
5) Filter out pairs with low IoU similarity, match with the Hungarian algorithm, and add the qualifying tracks to the track-list;
6) Compute the IoU similarity between the part that failed to match in the previous step and the strack low-confidence group, and perform Hungarian matching;
7) Add the qualifying tracks to the track-list;
8) Add the part that failed both rounds of matching to the lost-track list;
9) Compute the IoU similarity between the lost-track list and the high-confidence group and perform Hungarian matching;
10) Add the qualifying tracks to the track-list;
11) Delete the tracks still not added to the track-list;
12) Process the track-list and retain the longest tracks in it;
13) Output the final trajectories.
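To make this flow concrete, the following Python sketch implements the two-round high/low-confidence association, using scipy's linear_sum_assignment as the Hungarian matcher and the iou helper sketched above. The thresholds mirror the α = 0.8 and β = 0.1 values given earlier; the data structures and the IoU gate value are illustrative assumptions, not this application's actual implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_boxes, det_boxes, iou_gate=0.3):
    """Hungarian matching on an IoU cost matrix; returns matched (track, det)
    index pairs plus the unmatched track and detection indices."""
    if not track_boxes or not det_boxes:
        return [], list(range(len(track_boxes))), list(range(len(det_boxes)))
    cost = np.array([[1.0 - iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_gate]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_t = [i for i in range(len(track_boxes)) if i not in matched_t]
    unmatched_d = [j for j in range(len(det_boxes)) if j not in matched_d]
    return matches, unmatched_t, unmatched_d

def two_stage_update(predicted_track_boxes, detections, alpha=0.8, beta=0.1):
    """detections: list of (box, score) pairs. Implements the two-round
    high/low-confidence association described above. Returns the matched
    (track_index, box) pairs, the boxes that start new tracks, and the
    indices of tracks left unmatched after both rounds (kept as 'lost')."""
    high = [b for b, s in detections if s >= alpha]
    low = [b for b, s in detections if beta < s < alpha]
    # Round 1: all tracks vs. high-confidence boxes.
    m1, um_t, um_high = associate(predicted_track_boxes, high)
    # Round 2: remaining tracks vs. low-confidence boxes (IoU only).
    remaining = [predicted_track_boxes[i] for i in um_t]
    m2, um_t2, _ = associate(remaining, low)
    matches = [(t, high[d]) for t, d in m1] + [(um_t[t], low[d]) for t, d in m2]
    new_tracks = [high[d] for d in um_high]   # unmatched high-confidence boxes
    lost_tracks = [um_t[t] for t in um_t2]    # retained for later re-association
    return matches, new_tracks, lost_tracks
```

Unmatched low-confidence boxes are simply discarded as background, which is the behavior described in S3 above.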
In the present application, the miniature unmanned vehicle is provided with a single-line lidar, an RGB-D camera and an ultrasonic ranging sensor. These are the main means by which the miniature unmanned vehicle senses environment information; in positioning the vehicle and building the environment grid map, the ability to sense environment information determines the positioning accuracy of the miniature unmanned vehicle and the intelligence of its decisions. The single-line lidar achieves 360-degree long-range obstacle sensing in a single plane and provides obstacle information for path planning and obstacle avoidance. The RGB-D camera acquires point cloud information of the three-dimensional space in complex environments; its sensing results are used for path planning and emergency obstacle avoidance, making up for the single-line lidar's inability to sense three-dimensional spatial information. The ultrasonic ranging sensors must be installed around the miniature unmanned vehicle at specific positions and angles to sense close-range obstacles; since their sensing range is cone-shaped, they can detect whether an obstacle exists in the nearby three-dimensional space and are used for emergency obstacle avoidance. The environment grid map is built by a laser-inertial SLAM algorithm designed around the single-line lidar and an IMU, which performs real-time positioning and environment grid map construction; the resulting map is used for path planning and global positioning. Using this sensing information, the application builds a dynamic map around the miniature unmanned vehicle, fuses the dynamic map data with the vehicle's motion trajectory, and further builds a global map based on the vehicle's historical trajectory so that all objects lie in a unified, inheritable coordinate system.
In the method for calculating the navigation target point of the miniature unmanned vehicle, the path combination with the shortest travel distance from the initial position of the miniature unmanned vehicle to the navigation target point position P(x_p, y_p) is calculated with a shortest path algorithm, finally yielding the desired navigation end point.
In the present application, path planning for the miniature unmanned vehicle refers to the process of searching, in the vehicle's configuration space or Cartesian space, for a collision-free spatial path that bypasses all obstacles; the resulting path is a sequence of spatial points described by simple geometric relationships. Path planning algorithms include intelligent search algorithms, artificial-intelligence-based algorithms, geometric-model-based algorithms and local obstacle avoidance algorithms. The A* algorithm among the geometric-model-based algorithms is widely applied to global path planning for mobile platforms. The path planning approach adopted in this application performs global path planning with an A* algorithm based on the two-dimensional grid map and realizes local path optimization with the teb framework; key path planning parameters such as path control points, speed and acceleration are determined from the operator's walking trajectory, and the movement trajectory of the unmanned vehicle is generated after Kalman filtering and sent to the motion control module to complete the behavior control of the miniature unmanned vehicle.
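As an illustration of the global planning step, here is a compact Python sketch of the A* algorithm on a 2D occupancy grid. The 4-connected neighborhood, unit step cost and Manhattan-distance heuristic are illustrative choices, not configuration disclosed in this application; local optimization with the teb framework is not shown.

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 2D occupancy grid (0 = free, 1 = obstacle).
    start/goal are (row, col); returns the path as a list of cells, or None
    if no collision-free path exists."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]          # entries: (f = g + h, g, cell)
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:                         # reconstruct path backwards
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if g > g_cost.get(cur, float("inf")):   # skip stale heap entries
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None
```

Because the Manhattan heuristic never overestimates the true cost under 4-connected unit moves, the returned path is the shortest collision-free grid path.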
In the present application, the miniature unmanned vehicle is also provided with a mobile chassis, an industrial personal computer, a memory and a screen; the memory stores a computer program, and the processor invokes the computer program to execute the dynamic target identification, tracking and path planning method for the miniature unmanned vehicle. The planning capability required of the miniature unmanned vehicle is global path planning for the mobile chassis.
Finally, it should be noted that the foregoing is only a preferred embodiment of the present invention, and the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their technical features. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

Claims (10)

1. A path planning method for dynamic target identification and tracking of a miniature unmanned vehicle, characterized in that the method comprises the following steps:
Step one, an operator guides the miniature unmanned vehicle to move; during the movement, various posture images of the operator are collected through a camera and annotated to produce a training data set;
Step two, the data in the training data set are input into a target detection network for training to obtain a trained target detection model; a camera collects environment information to obtain a video stream, and the target detection model then detects and identifies the operator;
Step three, the size and position of the operator are predicted from the collected video stream, and the operator is tracked with a deep learning method that outputs the tracking trajectory;
Step four, a real-time environment grid map is built and the position of the miniature unmanned vehicle is marked in the map;
Step five, a navigation target point is marked according to the operator's position in the map from step four, and the navigation target point of the miniature unmanned vehicle is calculated;
Step six, a path planning algorithm plans the path of the miniature unmanned vehicle according to its position in the map and the navigation target point.
2. The path planning method for dynamic target identification and tracking of a miniature unmanned vehicle according to claim 1, characterized in that: the target detection network in step two is a YOLOv target detection network, and the camera is an RGB-D camera.
3. The path planning method for dynamic target identification and tracking of a miniature unmanned vehicle according to claim 2, characterized in that: the YOLOv target detection network comprises an input module, a Backbone module, a Neck module and a Prediction module, and the adjustments to the input module include Mosaic data augmentation to enrich the data set and reduce GPU usage, and an adaptive anchor calculation mode.
4. The path planning method for dynamic target identification and tracking of a miniature unmanned vehicle according to claim 1, characterized in that the method for target tracking with the deep learning method in step three is as follows:
S1, detection boxes and their detection scores are obtained from the detector; a box whose score is higher than a set high threshold α is placed in the high-confidence group, and a box whose score is lower than α but higher than a set low threshold β is placed in the low-confidence group;
S2, tracks are associated with the high-confidence boxes using the similarity between each detection box and the Kalman filter estimate; matching is then performed with the Hungarian algorithm based on this similarity, and the high-confidence detection boxes left unmatched to any track, as well as the tracks left unmatched to any box, are retained;
S3, the remaining tracks are associated with the low-confidence detection boxes; tracks still unmatched after this second round are retained, low-confidence boxes without a matching track are deleted, and high-confidence boxes without a matching track are saved as new tracks;
S4, detection boxes left unmatched after both rounds of matching are initialized as new tracks.
5. The path planning method for dynamic target identification and tracking of a miniature unmanned vehicle according to claim 4, characterized in that: the high threshold α is 0.8, the low threshold β is 0.1, and IoU is used as the similarity measure in S2.
6. The path planning method for dynamic target identification and tracking of a miniature unmanned vehicle according to claim 1, characterized in that: in step four the miniature unmanned vehicle is provided with a single-line lidar, an RGB-D camera, an ultrasonic ranging sensor, a mobile chassis, an industrial personal computer, a memory and a screen.
7. The path planning method for dynamic target identification and tracking of a miniature unmanned vehicle according to claim 6, characterized in that: the environment grid map in step four is built by a laser-inertial SLAM algorithm designed around the single-line lidar and an IMU, which performs real-time positioning and environment grid map construction.
8. The path planning method for dynamic target identification and tracking of a miniature unmanned vehicle according to claim 1, characterized in that: the navigation target point of the miniature unmanned vehicle in step five is calculated by finding, with a shortest path algorithm, the path combination with the shortest travel distance from the initial position of the miniature unmanned vehicle to the navigation target point position P(x_p, y_p).
9. The path planning method for dynamic target identification and tracking of a miniature unmanned vehicle according to claim 1, characterized in that: the path planning algorithms in step six include intelligent search algorithms, artificial-intelligence-based algorithms, geometric-model-based algorithms and local obstacle avoidance algorithms.
10. The path planning method for dynamic target identification and tracking of a miniature unmanned vehicle according to claim 9, characterized in that: the geometric-model-based algorithm adopts the A* algorithm.
CN202311705930.2A 2023-12-13 2023-12-13 Path planning method for dynamic target identification and tracking of miniature unmanned vehicle Pending CN117928570A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311705930.2A CN117928570A (en) 2023-12-13 2023-12-13 Path planning method for dynamic target identification and tracking of miniature unmanned vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311705930.2A CN117928570A (en) 2023-12-13 2023-12-13 Path planning method for dynamic target identification and tracking of miniature unmanned vehicle

Publications (1)

Publication Number Publication Date
CN117928570A (en) 2024-04-26

Family

ID=90751455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311705930.2A Pending CN117928570A (en) 2023-12-13 2023-12-13 Path planning method for dynamic target identification and tracking of miniature unmanned vehicle

Country Status (1)

Country Link
CN (1) CN117928570A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination