CN113112524B - Track prediction method and device for moving object in automatic driving and computing equipment - Google Patents
- Publication number
- CN113112524B (application number CN202110429261.5A)
- Authority
- CN
- China
- Prior art keywords
- data
- moving object
- obtaining
- current image
- image frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/251—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Abstract
The application relates to a track prediction method and device for a moving object in automatic driving, and a computing device. The prediction method comprises the following steps: obtaining a video stream of the surroundings of the autonomous vehicle; for a current image frame in the video stream, obtaining detection data of a moving object through a preset target detector; obtaining state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter; obtaining road related characteristic data corresponding to the current image frame; and obtaining movement track prediction data of the moving object within a preset time period after the current image frame according to the state estimation data of the moving object in the next frame, the road related characteristic data, and a pre-stored track prediction model. This scheme improves the accuracy of predicting the motion track of a moving object and thereby improves the driving safety of the autonomous vehicle.
Description
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular to a method and an apparatus for predicting the track of a moving object in automatic driving, and a computing device.
Background
At present, research on moving objects in the field of automatic driving mostly focuses on detection, identification and target tracking, that is, on the current position of a moving object, and rarely predicts its future position. When an automatic driving automobile detects a moving object passing in front of it, the automobile may choose to stop and wait until the moving object has passed beyond its safety envelope before continuing to drive.
In the related art, the main problem with the intelligent decision-making behavior of an automatic driving automobile facing a moving object is that the moving object is treated as a generic obstacle for prediction. As a result, the accuracy of track prediction for moving objects in actual scenes is low and cannot meet practical requirements.
Disclosure of Invention
In order to overcome the problems in the related art, the application provides a track prediction method and device for a moving object in automatic driving, and a computing device, which can improve the accuracy of track prediction for a moving object and thereby improve the driving safety of an automatic driving vehicle.
The first aspect of the present application provides a track prediction method for a moving object in automatic driving, including:
S11: acquiring a video stream of the surrounding environment of the automatic driving vehicle;
S12: obtaining detection data of a moving object through a preset target detector for a current image frame in the video stream;
S13: acquiring state estimation data of the moving object in a next frame according to the detection data of the moving object and a preset Kalman filter;
S14: obtaining road related characteristic data corresponding to the current image frame;
S15: obtaining movement track prediction data of the moving object within a preset time period after the current image frame according to the state estimation data, the road related characteristic data and a pre-stored track prediction model.
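As a minimal, runnable sketch of how steps S11–S15 could be chained, consider the toy pipeline below. Every class and function in it (`ToyDetector`, `ToyKalman`, `road_features`, `toy_predict`) is a hypothetical stand-in invented for illustration; the patent does not specify these APIs, and the constant-velocity arithmetic merely stands in for the real detector, Kalman filter and track prediction model.

```python
# Toy end-to-end sketch of S11-S15. Every component below is a hypothetical
# stand-in: a real system would use a neural detector (S12), a proper
# Kalman filter (S13), map/image features (S14) and a learned model (S15).

class ToyDetector:
    """S12 stand-in: pretends each frame contains one object at x = frame index."""
    def detect(self, frame):
        return [(float(frame), 5.0, 2.0, 2.0)]  # (cx, cy, w, h)

class ToyKalman:
    """S13 stand-in: estimates the next-frame position as position + velocity."""
    def __init__(self):
        self.prev = None
    def next_state(self, det):
        cx, cy = det[0], det[1]
        vx, vy = (0.0, 0.0) if self.prev is None else (cx - self.prev[0], cy - self.prev[1])
        self.prev = (cx, cy)
        return (cx + vx, cy + vy, vx, vy)  # estimated state in the next frame

def road_features(frame):
    """S14 stand-in: would come from high-precision map data plus the image."""
    return {"lane_width": 3.5}

def toy_predict(state, features, horizon=3):
    """S15 stand-in: constant-velocity rollout over the prediction horizon."""
    x, y, vx, vy = state
    return [(x + vx * t, y + vy * t) for t in range(1, horizon + 1)]

def pipeline(frames):
    detector, kf = ToyDetector(), ToyKalman()
    trajectories = []
    for frame in frames:                        # S11: frames of the video stream
        for det in detector.detect(frame):      # S12: detection data
            state = kf.next_state(det)          # S13: next-frame state estimate
            feats = road_features(frame)        # S14: road related features
            trajectories.append(toy_predict(state, feats))  # S15: track prediction
    return trajectories
```

For example, `pipeline([0, 1, 2])` returns one predicted track per frame; from the second frame on, the toy object moves one unit per frame, so its predicted track continues at that speed.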
According to one embodiment of the present application, obtaining the state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter specifically includes: obtaining Kalman filtering state estimation data of the moving object in the previous frame; obtaining state observation data of the moving object in the current image frame; and obtaining the state estimation data of the moving object in the next frame according to the Kalman filtering state estimation data of the previous frame and the state observation data in the current image frame.
In some embodiments, obtaining the road related feature data corresponding to the current image frame specifically includes: acquiring position data and course angle data of the automatic driving vehicle at the same time as the current image frame; and obtaining road related characteristic data corresponding to the current image frame according to the position data, the course angle data, the pre-stored high-precision map data and the current image frame.
In some embodiments, obtaining detection data of the moving object through a preset target detector for a current image frame in the video stream specifically includes: obtaining a plurality of detection data corresponding to a plurality of moving objects through a preset target detector for the current image frame. Correspondingly, obtaining state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter specifically includes: obtaining, through a preset multi-target tracking algorithm applied to the current image frame and the previous frame of the video stream, the plurality of detection data of the plurality of moving objects in the current image frame and matching data against the plurality of detection data in the previous frame; and, for each moving object, according to the matching data: obtaining Kalman filtering state estimation data of the moving object in the previous frame; acquiring state observation data of the moving object in the current image frame; and obtaining state estimation data of the moving object in the next frame according to the Kalman filtering state estimation data of the previous frame and the state observation data in the current image frame.
In some embodiments, after acquiring a video stream of the surroundings of the autonomous vehicle, acquiring a plurality of image frames from the video stream, and respectively performing S12 to S15 with each image frame as a current image frame to acquire a plurality of pieces of trajectory prediction data of the same moving object; and obtaining a piece of track prediction data according to the plurality of pieces of track prediction data.
In some embodiments, performing moving object detection through a preset target detector specifically includes: performing moving object detection through a YOLOv5 detection network.
In some embodiments, the state estimation data include at least part of the direction data, speed data and position data of the moving object; and/or the road related characteristic data include at least part of the road line data, marking data and crossing data around the position of the moving object.
A second aspect of the present application provides a track prediction device for a moving object in automatic driving, including: a first acquisition module, configured to acquire a video stream of the surrounding environment of the automatic driving vehicle; a target detection module, configured to obtain detection data of a moving object through a preset target detector for the current image frame in the video stream; a state estimation module, configured to obtain state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter; a second acquisition module, configured to acquire road related characteristic data corresponding to the current image frame; and a track prediction module, configured to obtain movement track prediction data of the moving object within a preset time period after the current image frame according to the state estimation data, the road related characteristic data and a pre-stored track prediction model.
A third aspect of the present application provides a computing device comprising:
a processor; and
a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the method as described above.
According to the track prediction method for a moving object in automatic driving provided by the application, state estimation data of the moving object in the next frame are obtained according to the detection data of the moving object in the current image frame and a preset Kalman filter; and movement track prediction data of the moving object within a preset time period after the current image frame are obtained according to the state estimation data of the moving object in the next frame, the road related characteristic data corresponding to the current image frame, and a pre-stored track prediction model. Because the Kalman filter yields more accurate state estimation data of the moving object in the next frame, and the road related characteristic data are then used on this basis to predict the moving track, the track prediction in the embodiments of the application is more accurate, and the safety of the automatic driving vehicle can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
FIG. 1 is a flow chart of a method for predicting trajectories of moving objects in automatic driving according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for predicting trajectories of moving objects in automatic driving according to another embodiment of the present application;
FIG. 3 is a block diagram schematically illustrating a configuration of a trajectory prediction device according to an embodiment of the present application;
FIG. 4 is a block diagram of a computing device according to an embodiment of the present application.
Reference numerals illustrate:
100. track prediction device; 310. a first acquisition module; 320. a target detection module; 330. a state estimation module; 340. a second acquisition module; 350. a track prediction module;
400. a computing device; 410. a memory; 420. a processor.
Detailed Description
Preferred embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present application, "a plurality" means two or more, unless explicitly defined otherwise.
In the related art, the main problem with the intelligent decision-making behavior of an automatic driving automobile facing a moving object is that the moving object is treated as a generic obstacle for prediction. As a result, the accuracy of track prediction for moving objects in actual scenes is low and cannot meet practical requirements.
In view of the above problems, the embodiments of the present application provide a method and an apparatus for predicting a trajectory of a moving object during automatic driving, which can improve the driving safety of an automatic driving vehicle by improving the accuracy of predicting the moving trajectory of the moving object.
The following describes the technical solutions of the embodiments of the present application in detail with reference to the accompanying drawings. It will be appreciated that the solutions of the present application may be performed by a computing device in an autonomous vehicle, but are not limited thereto; they may also be performed, for example, by a cloud server.
Fig. 1 is a flowchart of a method for predicting a trajectory of a moving object in automatic driving according to an embodiment of the present application. Referring to fig. 1, the track prediction method of the present embodiment includes:
s11: a video stream of the surroundings of the autonomous vehicle is acquired.
One or more acquisition devices may be provided on the autonomous vehicle to acquire a video stream of the surrounding environment, particularly in front of the vehicle, during the travel of the vehicle. The acquisition device transmits the acquired video stream to a computing device of the autonomous vehicle or to a cloud server through a mobile network.
S12: and obtaining detection data of the moving object through a preset target detector for the current image frame in the video stream.
In some embodiments, the video stream of the vehicle surroundings is acquired by an on-board camera of the autonomous vehicle. The camera may introduce distortion when acquiring images, so distortion correction can be performed on the current image frame before moving object detection to ensure that the corrected image is as close to reality as possible. This improves the accuracy of moving object track prediction based on images acquired by the camera.
Track prediction for a moving object begins with detecting the moving object in the image. The present application describes a generic moving object by way of example; it is to be understood that the moving object may be, for example, another vehicle or the like, and the application is not limited in this respect.
In some embodiments, moving object detection may be performed, for example, through a YOLO (You Only Look Once) v5 detection network, which can quickly and accurately identify target moving objects in the environment image. It will be appreciated that moving object detection may also be performed by other detectors in the YOLO series, region-based convolutional networks (the R-CNN family), SSD (Single Shot MultiBox Detector), and other target detection methods.
The detection data of the moving object output by the target detector may be position data of a bounding box representing the moving object, for example, the center coordinates and the scale size of the bounding box, or the vertex coordinates of the bounding box.
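Since the detector may report a bounding box either as center coordinates plus size or as vertex (corner) coordinates, converting between the two conventions is a common first step. A small sketch follows; the helper names are assumptions, not from the patent.

```python
# The detector may report a bounding box either as (center, size) or as corner
# coordinates; these small helpers (hypothetical names) convert between the two.

def center_to_corners(cx, cy, w, h):
    """(cx, cy, w, h) -> (x1, y1, x2, y2): top-left and bottom-right corners."""
    return (cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0)

def corners_to_center(x1, y1, x2, y2):
    """(x1, y1, x2, y2) -> (cx, cy, w, h)."""
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0, x2 - x1, y2 - y1)
```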
It can be appreciated that if there are multiple moving objects in the current image frame, respective detection data for each moving object is obtained.
S13: and obtaining state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter.
In the embodiments of the application, a Kalman filter may be used to predict the state of the moving object at the next moment. Kalman filtering can predict the next trend of a dynamic system that contains uncertain information. The actual motion of a moving object deviates from the motion of an ideal model, and the Kalman filter accounts for such errors in its computation, so that accurate state estimation data of the moving object in the next frame can be obtained.
In one implementation, obtaining the state estimation data of the mobile object in the next frame according to the detection data of the mobile object and the preset kalman filter may include:
the method comprises the steps of obtaining Kalman filtering state estimation data of a mobile object in a previous frame and state observation data of the mobile object in a current image frame, and obtaining state estimation data of the mobile object in a next frame according to the Kalman filtering state estimation data of the previous frame and the state observation data in the current image frame. The state estimation data may include, among other things, velocity estimation data and pose (position and/or direction) estimation data of the moving object. The state observation data may include velocity observation data and pose observation data of the moving object, which may be obtained by known methods, for example, by combining computer vision and image processing technologies with satellite positioning data of an autonomous vehicle and/or measurement data of an inertial measurement unit, which are not described in detail in the present application.
S14: and acquiring road related characteristic data corresponding to the current image frame.
In the present application, the movement track prediction of the moving object is performed in combination with the road related characteristic data around the moving object. These data may be identified and extracted from pre-stored high-precision map data and the current image frame.
In one implementation, the obtaining the road related feature data corresponding to the current image frame specifically includes: acquiring position data and course angle data of an automatic driving vehicle at the same moment as the current image frame, and acquiring road related map data corresponding to the current image frame according to the position data and the course angle data of the automatic driving vehicle and pre-stored high-precision map data; and identifying and acquiring real-time road characteristic data from the current image frame.
The position data and heading angle data of the autonomous vehicle may be obtained, for example, by satellite positioning data of the autonomous vehicle and/or measurement data of an inertial measurement unit.
The road related characteristic data include road related map data and real-time road characteristic data (e.g., real-time traffic signal indications and the pose and speed of other moving objects). The road related map data include, for example, at least part of the road data around the position of the moving object: road line data, such as lines of motor vehicle lanes, bicycle lanes and sidewalks; road marking data, such as static traffic signs and buildings; and road junction data, such as intersections and crosswalks.
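One way to group these features for input to a track prediction model is a simple container; the patent does not define a concrete schema, so all field names below are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical container for the road related characteristic data described
# above; the patent does not define a concrete schema, so all field names
# here are assumptions for illustration.

@dataclass
class RoadFeatures:
    lane_lines: list = field(default_factory=list)  # motor/bicycle lane, sidewalk polylines
    markings: list = field(default_factory=list)    # static traffic signs, buildings
    junctions: list = field(default_factory=list)   # intersections, crosswalks
    signals: dict = field(default_factory=dict)     # real-time traffic-signal states
```

Using `default_factory` keeps each instance's lists independent, so features collected for one frame do not leak into another.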
S15: and inputting the state estimation data of the moving object in the next frame and the road related characteristic data into a pre-stored track prediction model to obtain the moving track prediction data of the moving object in the preset time period after the current image frame.
The track prediction model is stored in advance. The state estimation data of the moving object in the next frame and the road related characteristic data are taken as input data to the track prediction model, which performs model prediction based on these inputs and outputs the movement track prediction data of the moving object within the preset time period after the current image frame.
It will be appreciated that the preset track prediction model may be obtained from a network or a server. Alternatively, a track prediction model may be constructed using a deep learning algorithm and trained on a pre-acquired training data set, iterating until convergence to obtain the trained track prediction model.
In some embodiments, after the video stream of the surroundings of the autonomous vehicle is acquired, a plurality of image frames are taken from the video stream, and S12 to S15 are performed with each image frame in turn as the current image frame, yielding a plurality of pieces of track prediction data for the same moving object; one piece of track prediction data is then obtained from these. Specifically, for example, one track may be selected from the plurality of tracks according to a preset rule, or the tracks may be fused into one track according to a preset method.
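The "fused into one track according to a preset method" option could be as simple as a point-wise average over the shared prediction horizon. The patent does not specify the fusion rule, so this averaging is an assumption for illustration.

```python
# Point-wise averaging over the shared horizon, as one hypothetical "preset
# method" for fusing several predicted tracks of the same object into one.
# Each track is a list of (x, y) points, one per future time step.

def fuse_trajectories(trajectories):
    horizon = min(len(t) for t in trajectories)  # only fuse overlapping steps
    fused = []
    for i in range(horizon):
        xs = [t[i][0] for t in trajectories]
        ys = [t[i][1] for t in trajectories]
        fused.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return fused
```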
According to the track prediction method for a moving object in automatic driving described above, state estimation data of the moving object in the next frame are obtained according to the detection data of the moving object in the current image frame and a preset Kalman filter; and movement track prediction data of the moving object within a preset time period after the current image frame are obtained according to the state estimation data of the moving object in the next frame, the road related characteristic data corresponding to the current image frame, and a pre-stored track prediction model. Because the Kalman filter yields more accurate state estimation data of the moving object in the next frame, and the road related characteristic data are then used on this basis to predict the moving track, the track prediction in this embodiment is more accurate, and the safety of the automatic driving vehicle can be improved.
Fig. 2 illustrates a trajectory prediction method of a moving object in automatic driving according to another embodiment of the present application. Referring to fig. 2, the method of this embodiment includes:
s21: a video stream of the surroundings of the autonomous vehicle is acquired.
S22: and obtaining a plurality of detection data corresponding to the plurality of moving objects through a preset target detector for the current image frame in the video stream.
The plurality of detection data obtained for the plurality of moving objects may be, for example, sets of position data representing the bounding boxes of the moving objects. Each set of position data may be, for example, the center coordinates and size of the corresponding bounding box, or the vertex coordinates of the bounding box.
S23: and obtaining a plurality of detection data of a plurality of moving objects in the current image frame and matching data of a plurality of detection data in the previous image frame by a preset multi-target tracking algorithm for the current image frame and the previous image frame in the video stream.
When there are multiple moving objects in the image, the same moving object must be tracked and matched across different frames according to the moving object detection results. In this embodiment this may be implemented, for example but not limited to, by the well-known Hungarian algorithm, which is not described in detail in this application.
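The paragraph above names the Hungarian algorithm for associating detections across frames. As a self-contained illustration, the sketch below uses a simplified greedy IoU (intersection-over-union) matching instead of the full Hungarian assignment; in practice a library routine such as SciPy's `linear_sum_assignment` on an IoU cost matrix would compute the optimal matching. All function names here are illustrative.

```python
# Greedy IoU matching as a simplified stand-in for the Hungarian algorithm
# named above; boxes are given as (x1, y1, x2, y2) corner coordinates.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def greedy_match(prev_boxes, curr_boxes, min_iou=0.3):
    """Pair previous-frame boxes with current-frame boxes, best overlap first."""
    scored = sorted(
        (iou(p, c), i, j)
        for i, p in enumerate(prev_boxes)
        for j, c in enumerate(curr_boxes)
    )
    pairs, matched_prev, matched_curr = [], set(), set()
    for score, i, j in reversed(scored):   # highest IoU first
        if score < min_iou or i in matched_prev or j in matched_curr:
            continue
        pairs.append((i, j))
        matched_prev.add(i)
        matched_curr.add(j)
    return pairs
```

Each returned pair `(i, j)` links object `i` in the previous frame to detection `j` in the current frame; unmatched detections would start new tracks.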
S24: and according to the matching data, for each moving object, obtaining state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter.
A kalman filter may be employed to predict the state of the moving object at the next instant.
In one implementation, obtaining the state estimation data of the mobile object in the next frame according to the detection data of the mobile object and the preset kalman filter may include: obtaining Kalman filtering state estimation data of the same mobile object in the previous frame according to the matching data; acquiring state observation data of a moving object in a current image frame; and obtaining the state estimation data of the moving object in the next frame according to the Kalman filtering state estimation data of the previous frame and the state observation data in the current image frame.
S25: acquiring road related characteristic data corresponding to a current image frame;
the road related characteristic data around the moving object may be identified and extracted from pre-stored high-definition map data and current image frames.
In one implementation, the obtaining the road related feature data corresponding to the current image frame specifically includes: acquiring position data and course angle data of an automatic driving vehicle at the same moment as the current image frame, and acquiring road related map data corresponding to the current image frame according to the position data and the course angle data of the automatic driving vehicle and pre-stored high-precision map data; and identifying and acquiring real-time road characteristic data from the current image frame.
S26: and respectively inputting the state estimation data of each moving object in the next frame and the road related characteristic data into a pre-stored track prediction model to obtain the moving track prediction data of each moving object in the preset time period after the current image frame.
The track prediction model is stored in advance. The state estimation data of the moving object in the next frame and the road related characteristic data are taken as input data to the track prediction model, which performs model prediction based on these inputs and outputs the movement track prediction data of the moving object within the preset time period after the current image frame.
After obtaining the plurality of pieces of movement track prediction data of the plurality of moving objects, a running decision of the automatic driving vehicle can be made according to the plurality of pieces of movement track prediction data.
Corresponding to the embodiment of the method for predicting the track of the moving object in the automatic driving, the application also provides a track predicting device.
Fig. 3 is a schematic structural diagram of a trajectory prediction device for a moving object in automatic driving according to an embodiment of the present application.
Referring to fig. 3, the track prediction apparatus 100 provided in this embodiment includes:
a first obtaining module 310, configured to obtain a video stream of the environment surrounding the autonomous vehicle;
a target detection module 320, configured to obtain detection data of a moving object through a preset target detector for the current image frame in the video stream;
a state estimation module 330, configured to obtain state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter;
a second obtaining module 340, configured to obtain road-related feature data corresponding to the current image frame;
and a trajectory prediction module 350, configured to obtain movement-trajectory prediction data of the moving object within a preset time period after the current image frame according to the state estimation data of the moving object in the next frame, the road-related feature data, and the pre-stored trajectory prediction model.
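The five modules above can be wired together as a simple pipeline. In this sketch each module is injected as a callable; all of the collaborator interfaces are hypothetical stand-ins, not APIs defined by the patent:

```python
class TrajectoryPredictionPipeline:
    """Composes the apparatus modules: detection, per-object state
    estimation, road-feature lookup, and trajectory prediction.
    Frame acquisition (module 310) is assumed to happen upstream."""

    def __init__(self, detector, estimator, map_source, model):
        self.detector = detector      # target detection module (320)
        self.estimator = estimator    # state estimation module (330)
        self.map_source = map_source  # second obtaining module (340)
        self.model = model            # trajectory prediction module (350)

    def process(self, frame, vehicle_pose):
        """Return one predicted trajectory per detected moving object."""
        detections = self.detector(frame)
        states = [self.estimator(d) for d in detections]
        road_features = self.map_source(vehicle_pose, frame)
        return [self.model(s, road_features) for s in states]
```

The road features are computed once per frame and shared across all objects, which matches the description: the map query depends on the ego pose, not on any individual detection.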
In some embodiments, the state estimation module 330 obtains the state estimation data of the moving object in the next frame according to the moving-object detection data and a preset Kalman filter, which specifically includes:
obtaining Kalman filtering state estimation data of the moving object in a previous frame;
acquiring state observation data of the moving object in a current image frame;
and obtaining the state estimation data of the moving object in the next frame according to the Kalman filtering state estimation data of the moving object in the previous frame and the state observation data in the current image frame.
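The three steps above are the classic Kalman predict/correct cycle followed by one more prediction for the next frame. As an illustration for a single coordinate (the two-dimensional state, noise levels `q` and `r`, and time step are assumptions; the patent does not specify them):

```python
def kalman_next_state(x, P, z, dt=1.0, q=0.01, r=0.5):
    """One constant-velocity Kalman cycle for a scalar position.

    x = [position, velocity] estimate from the previous frame,
    P = 2x2 covariance (list of lists), z = position observed in the
    current image frame. Returns (next-frame state estimate, updated P).
    """
    # predict: x' = F x, P' = F P F^T + Q, with F = [[1, dt], [0, 1]]
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [
        [P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
         P[0][1] + dt * P[1][1]],
        [P[1][0] + dt * P[1][1], P[1][1] + q],
    ]
    # update with the position observation z (H = [1, 0])
    S = Pp[0][0] + r                      # innovation covariance
    K = [Pp[0][0] / S, Pp[1][0] / S]      # Kalman gain
    innov = z - xp[0]
    xu = [xp[0] + K[0] * innov, xp[1] + K[1] * innov]
    Pu = [
        [(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
        [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]],
    ]
    # the "next frame" estimate is one more prediction from the update
    x_next = [xu[0] + dt * xu[1], xu[1]]
    return x_next, Pu
```

A full tracker would run this per coordinate (or with a 4-state vector) for every matched object; only the updated covariance `Pu` is carried forward into the following frame's cycle.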
In some embodiments, the second obtaining module 340 obtains the road related feature data corresponding to the current image frame, specifically including:
acquiring position data and course angle data of an automatic driving vehicle at the same moment as a current image frame;
and obtaining the road related characteristic data corresponding to the current image frame according to the position data, the course angle data, the pre-stored high-precision map data and the current image frame.
Fig. 4 is a schematic diagram of a computing device 400 according to an embodiment of the present application. The computing device of this embodiment may be, for example, a device mounted on an autonomous vehicle, or a cloud server.
Referring to fig. 4, a computing device 400 includes a memory 410 and a processor 420.
The processor 420 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Memory 410 may include various types of storage units, such as system memory, read-only memory (ROM), and persistent storage. The ROM may store static data or instructions required by the processor 420 or other modules of the computer. The persistent storage may be a readable and writable storage device, and may be a non-volatile memory device that does not lose stored instructions and data even after the computer is powered down. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is employed as the persistent storage; in other embodiments, the persistent storage may be a removable storage device (e.g., a diskette or an optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory, and may store instructions and data required by some or all of the processors at runtime. Furthermore, memory 410 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory), magnetic disks, and/or optical disks. In some embodiments, memory 410 may include readable and/or writable removable storage devices such as compact discs (CDs), digital versatile discs (e.g., DVD-ROMs, dual-layer DVD-ROMs), read-only Blu-ray discs, super-density discs, flash memory cards (e.g., SD cards, mini SD cards, micro-SD cards), magnetic floppy disks, and the like. The computer-readable storage medium does not contain carrier waves or transient electronic signals transmitted wirelessly or over wires.
The memory 410 has stored thereon executable code that, when processed by the processor 420, can cause the processor 420 to perform some or all of the methods described above.
The aspects of the present application have been described in detail above with reference to the accompanying drawings. In the foregoing embodiments, the description of each embodiment has its own emphasis; for portions of one embodiment not described in detail, reference may be made to the related descriptions of other embodiments. Those skilled in the art will also appreciate that the acts and modules referred to in the specification are not necessarily required by the present application. In addition, it can be understood that the steps in the methods of the embodiments of the present application may be reordered, combined, or omitted according to actual needs, and the modules in the apparatuses of the embodiments may be combined, divided, or omitted according to actual needs.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The embodiments of the present application have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (9)
1. A trajectory prediction method of a moving object in automatic driving, comprising:
s11: acquiring video streams of the surrounding environment of the automatic driving vehicle;
s12: obtaining detection data of a moving object through a preset target detector for a current image frame in the video stream;
s13: acquiring state estimation data of the moving object in a next frame according to the detection data of the moving object and a preset Kalman filter;
s14: acquiring road related characteristic data corresponding to the current image frame;
s15: obtaining movement track prediction data of the moving object in a preset time period after the current image frame according to the state estimation data, the road related characteristic data and a pre-stored track prediction model;
the state estimation data includes at least part of direction data, speed data, and position data of the moving object; and/or
the road-related feature data includes at least part of road line data, marking data, and intersection data around the location where the moving object is located.
2. The method according to claim 1, wherein obtaining the state estimation data of the moving object in the next frame according to the moving object detection data and a preset kalman filter specifically includes:
obtaining Kalman filtering state estimation data of the moving object in a previous frame;
obtaining state observation data of the moving object in the current image frame;
and obtaining the state estimation data of the moving object in the next frame according to the Kalman filtering state estimation data of the previous frame and the state observation data in the current image frame.
3. The method according to claim 2, characterized in that: the obtaining of the road related characteristic data corresponding to the current image frame specifically comprises the following steps:
obtaining position data and course angle data of the automatic driving vehicle at the same time as the current image frame, and obtaining road related map data corresponding to the current image frame according to the position data, the course angle data and pre-stored high-precision map data; and
and identifying and acquiring preset real-time road characteristic data from the current image frame.
4. The method according to claim 1, characterized in that:
obtaining detection data of a moving object through a preset target detector for a current image frame in the video stream, wherein the detection data specifically comprises: obtaining a plurality of detection data corresponding to a plurality of moving objects through a preset target detector for a current image frame in the video stream;
obtaining state estimation data of the moving object in a next frame according to the detection data of the moving object and a preset Kalman filter, wherein the method specifically comprises the following steps:
obtaining, for the current image frame and a previous frame of the video stream, through a preset multi-target tracking algorithm, a plurality of detection data of the plurality of moving objects in the current image frame and matching data with a plurality of detection data in the previous frame; and,
for each moving object, according to the matching data:
obtaining Kalman filtering state estimation data of the moving object in a previous frame;
acquiring state observation data of the moving object in a current image frame;
and obtaining state estimation data of the moving object in the next frame according to the Kalman filtering state estimation data of the previous frame and the state observation data in the current image frame.
5. The method according to claim 1, characterized in that: after acquiring a video stream of the surroundings of the autonomous vehicle, acquiring a plurality of image frames from the video stream, and respectively executing S12 to S15 with each image frame as a current image frame to acquire a plurality of pieces of track prediction data of the same moving object; and obtaining one piece of track prediction data according to the plurality of pieces of track prediction data.
6. The method according to any one of claims 1 to 5, wherein: detecting a moving object through a preset target detector, specifically including:
mobile object detection is performed through the YOLOv5 detection network.
7. A trajectory prediction apparatus for a moving object in automatic driving, comprising:
the first acquisition module is used for acquiring video streams of the surrounding environment of the automatic driving vehicle;
the target detection module is used for obtaining detection data of a moving object through a preset target detector for the current image frame in the video stream;
the state estimation module is used for obtaining state estimation data of the mobile object in the next frame according to the detection data of the mobile object and a preset Kalman filter;
the second acquisition module is used for acquiring road related characteristic data corresponding to the current image frame;
the track prediction module is used for obtaining movement track prediction data of the moving object in a preset time period after the current image frame according to the state estimation data, the road related characteristic data and a pre-stored track prediction model;
the state estimation data includes at least part of direction data, speed data, and position data of the moving object; and/or
the road-related feature data includes at least part of road line data, marking data, and intersection data around the location where the moving object is located.
8. The apparatus according to claim 7, wherein the state estimation module obtains the state estimation data of the moving object in the next frame according to the moving-object detection data and a preset Kalman filter, specifically including:
obtaining Kalman filtering state estimation data of the moving object in a previous frame;
obtaining state observation data of the moving object in the current image frame;
and obtaining the state estimation data of the moving object in the next frame according to the Kalman filtering state estimation data of the previous frame and the state observation data in the current image frame.
9. A computing device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110429261.5A CN113112524B (en) | 2021-04-21 | 2021-04-21 | Track prediction method and device for moving object in automatic driving and computing equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113112524A CN113112524A (en) | 2021-07-13 |
CN113112524B true CN113112524B (en) | 2024-02-20 |
Family
ID=76719026
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110429261.5A Active CN113112524B (en) | 2021-04-21 | 2021-04-21 | Track prediction method and device for moving object in automatic driving and computing equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113112524B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113570595B (en) * | 2021-08-12 | 2023-06-20 | 上汽大众汽车有限公司 | Vehicle track prediction method and optimization method of vehicle track prediction model |
CN114245177B (en) * | 2021-12-17 | 2024-01-23 | 智道网联科技(北京)有限公司 | Smooth display method and device of high-precision map, electronic equipment and storage medium |
CN114495037A (en) * | 2021-12-31 | 2022-05-13 | 山东师范大学 | Video prediction method and system based on key points and Kalman filtering |
CN116883915B (en) * | 2023-09-06 | 2023-11-21 | 常州星宇车灯股份有限公司 | Target detection method and system based on front and rear frame image association |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106023244A (en) * | 2016-04-13 | 2016-10-12 | 南京邮电大学 | Pedestrian tracking method based on least square locus prediction and intelligent obstacle avoidance model |
US9552648B1 (en) * | 2012-01-23 | 2017-01-24 | Hrl Laboratories, Llc | Object tracking with integrated motion-based object detection (MogS) and enhanced kalman-type filtering |
CN111091591A (en) * | 2019-12-23 | 2020-05-01 | 百度国际科技(深圳)有限公司 | Collision detection method and device, electronic equipment and storage medium |
CN111292352A (en) * | 2020-01-20 | 2020-06-16 | 杭州电子科技大学 | Multi-target tracking method, device, equipment and storage medium |
CN111340855A (en) * | 2020-03-06 | 2020-06-26 | 电子科技大学 | Road moving target detection method based on track prediction |
CN111402293A (en) * | 2020-03-10 | 2020-07-10 | 北京邮电大学 | Vehicle tracking method and device for intelligent traffic |
CN111476817A (en) * | 2020-02-27 | 2020-07-31 | 浙江工业大学 | Multi-target pedestrian detection tracking method based on yolov3 |
WO2020164089A1 (en) * | 2019-02-15 | 2020-08-20 | Bayerische Motoren Werke Aktiengesellschaft | Trajectory prediction using deep learning multiple predictor fusion and bayesian optimization |
CN111666891A (en) * | 2020-06-08 | 2020-09-15 | 北京百度网讯科技有限公司 | Method and apparatus for estimating obstacle motion state |
CN111693972A (en) * | 2020-05-29 | 2020-09-22 | 东南大学 | Vehicle position and speed estimation method based on binocular sequence images |
CN112118537A (en) * | 2020-11-19 | 2020-12-22 | 蘑菇车联信息科技有限公司 | Method and related device for estimating movement track by using picture |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11112796B2 (en) * | 2017-08-08 | 2021-09-07 | Uatc, Llc | Object motion prediction and autonomous vehicle control |
US11630197B2 (en) * | 2019-01-04 | 2023-04-18 | Qualcomm Incorporated | Determining a motion state of a target object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||