CN111967498A - Night target detection and tracking method based on millimeter wave radar and vision fusion
- Publication number: CN111967498A
- Application number: CN202010699523.5A
- Authority: CN (China)
- Prior art keywords: target, image, millimeter wave radar, effective
- Legal status: Pending
Classifications
- G06F18/25—Fusion techniques
- G01S13/723—Radar-tracking systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar, by using numerical data
- G01S13/867—Combination of radar systems with cameras
- G06F18/22—Matching criteria, e.g. proximity measures
- G06T7/251—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/10016—Video; Image sequence
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06V2201/07—Target detection
Abstract
The invention discloses a night target detection and tracking method based on millimeter wave radar and vision fusion, which comprises the following steps: acquiring original data and an original image through a millimeter wave radar and a camera, respectively; processing the original data to obtain a first target track of an effective target; processing the original image to obtain a second target track of the effective target; and matching the first target track with the second target track. According to the invention, the camera's original (RAW) image is used to obtain richer dark-region information, and a deep-learning image-brightening algorithm restores dark-region detail, enhancing the night vision capability of the unmanned vehicle; the YOLO v4 target detection algorithm detects targets accurately and quickly, meeting the real-time requirement of unmanned driving; and the proposed track fusion method has good fault tolerance, so that the perception system can still work normally when one sensor fails.
Description
Technical Field
The invention relates to the field of environment perception for unmanned driving, and in particular to a night target detection and tracking method based on millimeter wave radar and vision fusion.
Background
Environment perception occupies a key position in the information interaction between an intelligent driving vehicle and its external environment. Most of the information a human receives while driving comes from vision, such as traffic signs, pedestrians and lane lines, and this visual road information is the driver's main basis for controlling the vehicle. In an intelligent driving vehicle, a camera replaces the human visual system, while a millimeter wave radar supplies the position and speed of targets ahead, so that the driving environment in front can be perceived accurately. In recent years computer vision has matured, and vision-based object detection, image classification and instance segmentation have developed rapidly. However, existing target detection frameworks concentrate on well-lit images, whereas in real life traffic accidents occur frequently at night and in low-illumination scenes, causing great loss of life and property. Research on night target detection is therefore significant for improving the safety of unmanned driving and reducing traffic accidents.
Disclosure of Invention
In view of this, the present invention provides a night target detection and tracking method based on millimeter wave radar and vision fusion, which has the following advantages: 1) the adopted track fusion method has high fault tolerance, and the system can still work normally when a single sensor fails; 2) original (RAW) image data are collected, yielding richer dark-region information; 3) various targets in the night driving environment, such as automobiles, pedestrians, motorcycles and bicycles, are detected quickly and accurately; 4) the image is brightened with deep learning, restoring dark-region detail well; 5) the method is robust and practically feasible.
The purpose of the invention is realized by the following technical scheme:
a night target detection and tracking method based on millimeter wave radar and vision fusion comprises the following steps:
respectively acquiring original data and an original image through a millimeter wave radar and a camera;
processing the original data to obtain a first target track of an effective target;
processing the original image to obtain a second target track of the effective target;
and matching the first target track with the second target track.
Further, the method for processing the original data specifically comprises the following steps:
removing invalid targets, static targets and redundant data in the original data;
determining the effective target through data association, and initializing a track for the effective target, wherein the data association uses the following quantities:
d is the relative distance of the target measured by the millimeter wave radar;
v is the relative speed of the target measured by the millimeter wave radar;
phi is a target angle measured by the millimeter wave radar;
m is the total number of targets measured by the radar in each period;
i is the time (cycle) index, and j indexes the jth radar datum of each period;
and tracking the formed track to obtain the first target track of the effective target.
Further, the original image is an image obtained by converting the captured light-source signal into a digital signal according to a Bayer array through a CMOS or CCD image sensor.
Further, the method for processing the original image comprises the following steps:
S1: brightening the original image;
S2: obtaining a bounding box, a category and a confidence for each effective target based on visual deep learning;
S3: tracking the effective target, and acquiring the second target track of the effective target based on a DeepSORT network.
Further, the S1 specifically includes:
S11: decomposing the Bayer array pattern of the original image into a four-channel image by color;
S12: adjusting the range of the four-channel image data by subtracting the black level so that the minimum value is zero;
S13: processing the adjusted four-channel image with a fully convolutional neural network;
S14: performing sub-pixel convolution on the image information output by the fully convolutional neural network to generate a high-resolution color image.
Further, the S2 specifically includes:
S21: collecting original Bayer-array images, and labeling all categories of traffic participants in the original images after brightening;
S22: training a YOLO v4 network model, and saving the weight file;
S23: inputting the brightened image into the trained YOLO v4 network model, computing and outputting the bounding box, category and confidence of the effective target through the weight file, and finally storing the bounding box and category of the effective target.
Further, the S3 specifically includes:
S31: passing the bounding-box information of the effective targets and the original image to a DeepSORT network, which uses a CNN to extract features from the image patch inside each effective target's bounding box;
S32: performing motion-feature association on all effective targets of two adjacent frames; successfully associated targets proceed to S33, motion-feature association being successful if the distance between the detection boxes of the effective targets in the two adjacent frames is smaller than a threshold;
S33: performing appearance-information association on the remaining effective targets of the two adjacent frames, and if the association succeeds, proceeding to S34, appearance-information association being successful if the appearance-similarity distance between the effective targets of the two adjacent frames is smaller than a specified threshold;
S34: performing fusion matching on the effective targets whose motion and appearance features were both successfully associated to obtain a final fusion result, the match being defined as successful if the fusion result is smaller than a threshold.
Further, the method for matching the first target track with the second target track specifically comprises the following steps:
projecting the central coordinate point of the first track onto an image according to the conversion relation between the millimeter wave radar coordinate system and the pixel coordinate system to obtain the central projection coordinate point of the first track;
and calculating the Euclidean distance between the central projection coordinate point of the first track and the corresponding central coordinate point of the second track, and if the Euclidean distance is smaller than a specified threshold value, defining that the matching is successful.
The invention has the beneficial effects that:
According to the invention, the camera's original (RAW) image is used to obtain richer dark-region information, and a deep-learning image-brightening algorithm restores dark-region detail, enhancing the night vision capability of the unmanned vehicle; the YOLO v4 target detection algorithm detects targets accurately and quickly, meeting the real-time requirement of unmanned driving; and the proposed track fusion method has good fault tolerance, so that the perception system can still work normally when one sensor fails.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of the present invention;
FIG. 2 is a schematic diagram of a method of processing an original image;
FIG. 3 is a schematic illustration of a Bayer array;
FIG. 4 is a schematic diagram of image brightening;
FIG. 5 is a schematic illustration of the decomposition of the Bayer array;
FIG. 6 is a schematic diagram before and after correction of black level, in which (a) is a schematic diagram before correction and (b) is a schematic diagram after correction;
FIG. 7 is a schematic diagram of a ConvNet structure;
FIG. 8 is a comparison before and after image brightening, in which (a) is the original image and (b) is the brightened image.
Detailed Description
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. It should be understood that the preferred embodiments are illustrative of the invention only and are not limiting upon the scope of the invention.
This embodiment provides a night target detection and tracking method based on millimeter wave radar and vision fusion. The method can detect various traffic participants such as pedestrians, automobiles, motorcycles and bicycles in the driving environment ahead and, as shown in FIG. 1, specifically comprises the following steps.
Original data and an original image are acquired through a millimeter wave radar and a camera, respectively. The original image is the raw data obtained by converting the captured light-source signal into a digital signal through a CMOS or CCD image sensor according to a Bayer array; each pixel records only one color.
As shown in FIG. 3, the Bayer array is the dominant color-filter technology with which CMOS or CCD sensors capture color images; each 4 × 4 tile of it contains 8 green, 4 blue and 4 red pixels.
The original data are processed to obtain the first target track of the effective target. The method specifically comprises the following steps:
Invalid and static targets are removed from the original data; because a single physical target can reflect several radar points, redundant detections are merged by clustering, which reduces the computational load (a minimal sketch follows below).
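The patent does not name the clustering algorithm, so the following is a minimal sketch assuming a simple distance-based grouping (a stand-in for, e.g., DBSCAN); EPS and cluster_radar_points are illustrative names, not terms from the filing.

```python
import numpy as np

# Merge radar points reflected by the same physical target: greedily group
# every point within EPS metres of a seed point and keep one centroid per
# group. EPS is an assumed merge radius.
EPS = 1.5

def cluster_radar_points(points):
    """points: (N, 2) array of x/y positions; returns one centroid per cluster."""
    points = np.asarray(points, dtype=float)
    unassigned = list(range(len(points)))
    centroids = []
    while unassigned:
        seed = unassigned.pop(0)
        members = [seed]
        for idx in unassigned[:]:          # absorb neighbours of the seed
            if np.linalg.norm(points[idx] - points[seed]) < EPS:
                members.append(idx)
                unassigned.remove(idx)
        centroids.append(points[members].mean(axis=0))
    return np.array(centroids)
```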
The effective target is determined by data association, and the track of the effective target is initialized; in this embodiment, a target is confirmed as effective after being successfully associated in 3 consecutive cycles. The data association uses the following quantities:
d is the relative distance of the target measured by the millimeter wave radar;
v is the relative speed of the target measured by the millimeter wave radar;
phi is a target angle measured by the millimeter wave radar;
m is the total number of targets measured by the radar in each period;
i is the time (cycle) index, and j indexes the jth radar datum of each period;
and tracking the formed flight path by using a Kalman filtering algorithm to further obtain a first target flight path.
The original image is processed to obtain the second target track of the effective target, as shown in FIG. 2. The method specifically comprises the following steps.
S1: and performing image brightening on the original image.
As shown in FIG. 4, the Bayer array pattern of the original image (H × W × 1) is decomposed by color into a four-channel image. The invention takes a 2 × 2 Bayer cell as the basic unit: Bayer(1,1) maps to R(1,1), Bayer(1,2) to G2(1,1), Bayer(2,1) to G4(1,1), and Bayer(2,2) to B(1,1); the other Bayer cells are treated in the same way (a minimal sketch of this unpacking follows below). As shown in FIG. 5, R is the red channel; G2 is the green channel at the second position of the cell; G4 is the green channel at the fourth position; B is the blue channel; (1,1) denotes the pixel position in the first row and first column.
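As a concrete illustration of this decomposition, the sketch below packs each 2 × 2 RGGB cell of an (H, W) Bayer mosaic into one pixel of an (H/2, W/2, 4) tensor, following the cell mapping above; pack_bayer is an illustrative name.

```python
import numpy as np

def pack_bayer(raw):
    """raw: (H, W) Bayer mosaic with even H and W; returns (H/2, W/2, 4)."""
    return np.stack([raw[0::2, 0::2],   # R  at cell position (1, 1)
                     raw[0::2, 1::2],   # G2 at cell position (1, 2)
                     raw[1::2, 0::2],   # G4 at cell position (2, 1)
                     raw[1::2, 1::2]],  # B  at cell position (2, 2)
                    axis=-1)
```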
The exposure ratio is then calculated as follows.
1. Calculate the exposure value of the current frame:
EV = log2(N^2 / t)
In the formula: N is the aperture value (f-number); t is the exposure time in seconds.
2. Calculate the exposure ratio γ by comparing EV with the normal exposure value EVtest determined by experiment (a sketch of this step follows below).
The exposure ratio is the amplification factor for image brightening; different dark scenes require different amplification factors, and choosing it this way avoids under-exposure and over-exposure of the brightened image.
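A worked sketch of the exposure computation: EV = log2(N²/t) is the standard exposure-value relation; reading the exposure ratio as the linear gain 2^(EVtest − EV) is our assumption based on its description as an amplification factor, since the patent's own expression is not reproduced in the text.

```python
import math

def exposure_ratio(n_aperture, t_seconds, ev_test):
    """Assumed form: linear gain corresponding to the EV shortfall."""
    ev = math.log2(n_aperture ** 2 / t_seconds)   # EV of the current frame
    return 2.0 ** (ev_test - ev)

# Example: f/2.0 at 1/30 s against an experimentally normal EV of 12
gamma = exposure_ratio(2.0, 1 / 30, 12.0)         # ~34x amplification
```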
As shown in FIG. 6, the data range of the four-channel image is adjusted by subtracting the black level so that its minimum value becomes zero.
As shown in FIG. 7, the adjusted four-channel image is processed with a fully convolutional neural network: a U-Net with all channel counts halved, whose last convolution layer uses a 1 × 1 kernel and outputs 12-channel image information.
The 12-channel image information output by the fully convolutional network is then up-sampled by sub-pixel convolution: rearranging the low-resolution feature map into a high-resolution image realizes super-resolution and produces an H × W × 3 RGB color image.
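A minimal PyTorch sketch of this step, assuming the shapes implied by the text: the network's 12-channel, half-resolution output is rearranged by a 2× sub-pixel convolution (PixelShuffle) into a full-resolution 3-channel RGB image, since 12 = 3 · 2 · 2.

```python
import torch
import torch.nn as nn

upsample = nn.PixelShuffle(upscale_factor=2)   # depth-to-space rearrangement

features = torch.randn(1, 12, 360, 640)        # stand-in for the U-Net output
rgb = upsample(features)                       # -> torch.Size([1, 3, 720, 1280])
```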
A comparison before and after brightening is shown in FIG. 8.
S2: based on the visual deep learning, obtaining a bounding box, a category and a confidence coefficient of an effective target, specifically:
s21: collecting an original Bayer array image, and labeling all types of traffic participants in the original image after the image is brightened, wherein the traffic participants are automobiles, pedestrians, bicycles, motorcycles and the like.
S22: the YOLO v4 network model was trained and the weight file saved.
Specifically, a YOLO v4 network model is established, image information containing various traffic participants including automobiles, pedestrians, bicycles, motorcycles and the like on a road is collected and labeled, and the data set is randomly divided into a training data set, a verification data set and a test data set.
The YOLO v4 network model is trained based on the training dataset, the validation dataset, and the test dataset, as well as the YOLO v4 network model. Extracting image characteristic information of the labeled data set, then carrying out classification task training on the characteristic information, continuously repeating the process of updating forward propagation-error calculation-backward propagation-weight by dynamically calculating the errors of the YOLO v4 network model on the training set and the testing set until the error value reaches an expected value, and storing the model and the weight file.
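The forward propagation, error calculation, backward propagation and weight update cycle, sketched with a stub network standing in for YOLO v4 (whose architecture is outside the scope of this example); the optimizer, loss and learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Stub classifier in place of YOLO v4; 4 classes: car, pedestrian, bicycle, motorcycle
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)   # forward propagation + error
    loss.backward()                           # backward propagation
    optimizer.step()                          # weight update
    return loss.item()

loss = train_step(torch.randn(2, 3, 64, 64), torch.tensor([0, 2]))
```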
S23: inputting the original image with the highlighted image into a trained YOLO v4 network model, calculating and outputting a bounding box, a category and a confidence coefficient of an effective target through a weight file, and finally storing the bounding box and the category of the effective target.
S3: and tracking the effective target, and acquiring a second target track of the effective target based on a DeepSORT network. The method specifically comprises the following steps:
s31: and transmitting the information of the boundary frame of the effective target and the original image to a DeepsORT network, and performing feature extraction on the detection block in the boundary frame of the effective target by the DeepsORT network by utilizing a CNN network.
S32: and performing motion characteristic association on all effective targets of two adjacent frames, and associating the successful effective targets, and entering S33, where the successful association of the motion characteristics is defined as: and if the distance between the detection frames of the effective targets of the two adjacent frames is smaller than the threshold value, the association is successful. The distance between the detection frames of the effective targets of two adjacent frames can be expressed by a formula as follows:
in the formula: djIndicating the position of the jth detection frame;
yirepresenting the predicted position of the ith tracker on the target;
Sirepresenting a covariance matrix between the detected position and the average tracking position.
S33: and performing appearance information association on the remaining valid targets of the two adjacent frames, and if the association is successful, entering S34, where the success of the appearance information association is defined as: the similarity of the appearance information of the effective targets of two adjacent frames is correlated, and if the distance is less than a specified threshold value, the correlation is successful.
The similarity of the appearance information of the valid targets of two adjacent frames can be expressed by the following formula:
in the formula:a set of feature vectors of the last 100 successful associations for each tracking target successful association;
rjthe feature vector of the ith detection block in the current image is used.
S34: and performing fusion matching on the effective targets with the motion characteristics and the appearance characteristics successfully associated to obtain a final fusion result, and if the fusion result is smaller than a threshold value, defining the matching to be successful.
The fusion matching is formulated as follows:
c_{i,j} = λ · d^(1)(i, j) + (1 − λ) · d^(2)(i, j)
where λ weights the motion term against the appearance term.
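A minimal numpy sketch of the three quantities above (the motion cost d^(1), the appearance cost d^(2) over the track's stored feature gallery, and the fused cost c_{i,j}), following the DeepSORT formulation; the weight LAMBDA is an assumed value, as the patent leaves λ and all thresholds unspecified.

```python
import numpy as np

LAMBDA = 0.5   # assumed motion/appearance weight

def motion_cost(d_j, y_i, S_i):
    """Squared Mahalanobis distance between detection d_j and prediction y_i."""
    diff = np.asarray(d_j, dtype=float) - np.asarray(y_i, dtype=float)
    return float(diff @ np.linalg.inv(S_i) @ diff)

def appearance_cost(r_j, gallery):
    """1 minus the max cosine similarity against the track's stored features."""
    r_j = r_j / np.linalg.norm(r_j)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return float(1.0 - np.max(gallery @ r_j))

def fused_cost(d_j, y_i, S_i, r_j, gallery, lam=LAMBDA):
    return lam * motion_cost(d_j, y_i, S_i) + (1 - lam) * appearance_cost(r_j, gallery)
```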
and matching the first target track with a second target track.
Projecting the central coordinate point of the first track onto the image according to the conversion relation between the millimeter wave radar coordinate system and the pixel coordinate system to obtain the central projection coordinate point of the first track;
and calculating the Euclidean distance between the central projection coordinate point of the first track and the corresponding central coordinate point of the second track, and if the Euclidean distance is smaller than a specified threshold value, defining that the matching is successful.
The Euclidean distance criterion is expressed as follows:
sqrt((u_C − u_R)^2 + (v_C − v_R)^2) < D
In the formula: D is the distance threshold; if the distance is smaller than the specified threshold, the radar track and the visual track belong to the same target;
(u_C, v_C) is the center point of the target track detected and tracked in the image;
(u_R, v_R) is the radar track point projected onto the image.
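A short sketch of this matching rule, with an assumed pixel threshold D:

```python
import numpy as np

D = 30.0   # assumed distance threshold in pixels

def match_tracks(radar_uv, image_uv, d_thr=D):
    """radar_uv: projected radar track centre (u_R, v_R); image_uv: (u_C, v_C)."""
    return float(np.hypot(radar_uv[0] - image_uv[0],
                          radar_uv[1] - image_uv[1])) < d_thr
```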
The camera is calibrated to establish the relation between the image pixel coordinate system and the world coordinate system; the conversion between the radar coordinate system and the world coordinate system is established from the radar's mounting position; chaining the two yields the conversion from radar coordinates to image pixel coordinates.
The conversion between the image pixel coordinate system and the world coordinate system can be expressed by the pinhole camera model:
Z_c · [u, v, 1]^T = K · [R_C | T_C] · [X_w, Y_w, Z_w, 1]^T, with K = [[f/dx, 0, u_0], [0, f/dy, v_0], [0, 0, 1]]
In the formula: R_C is the lens rotation matrix;
T_C is the lens translation matrix;
f is the focal length of the lens;
(u, v) is the pixel coordinate system;
(X_w, Y_w, Z_w) is the world coordinate system;
(u_0, v_0) are the coordinates of the image-plane center point in the pixel coordinate system;
dx and dy are the physical size of a pixel on the photosensitive chip.
The transformation between the radar coordinate system and the world coordinate system can be expressed as:
[X_w, Y_w, Z_w]^T = R_R · [X_R, Y_R, Z_R]^T + T_R
In the formula: (X_R, Y_R, Z_R) is the millimeter wave radar coordinate system;
R_R is the millimeter wave radar rotation matrix;
T_R is the millimeter wave radar translation matrix.
The conversion from radar coordinates to image pixel coordinates is obtained by composing the two relations above: a radar point is first transformed into world coordinates and then projected through the camera model.
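A sketch of the composed radar-to-pixel projection under assumed calibration values; in practice K would come from camera calibration, and R_C, T_C, R_R, T_R from the extrinsic calibration and the radar mounting position.

```python
import numpy as np

K = np.array([[800.0,   0.0, 640.0],    # [f/dx, 0, u0]
              [  0.0, 800.0, 360.0],    # [0, f/dy, v0]
              [  0.0,   0.0,   1.0]])
R_R, T_R = np.eye(3), np.zeros(3)                 # radar -> world (assumed)
R_C, T_C = np.eye(3), np.array([0.0, 1.2, 0.0])   # world -> camera (assumed)

def radar_to_pixel(p_radar):
    """Project a 3-D millimeter wave radar point to (u, v) pixel coordinates."""
    p_world = R_R @ np.asarray(p_radar, dtype=float) + T_R
    p_cam = R_C @ p_world + T_C
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]            # divide by depth Z_c

u, v = radar_to_pixel([2.0, 0.5, 30.0])
```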
finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.
Claims (8)
1. A night target detection and tracking method based on millimeter wave radar and vision fusion, characterized in that the method comprises the following steps:
respectively acquiring original data and an original image through a millimeter wave radar and a camera;
processing the original data to obtain a first target track of an effective target;
processing the original image to obtain a second target track of the effective target;
and matching the first target track with the second target track.
2. The night target detection and tracking method based on millimeter wave radar and vision fusion of claim 1, characterized in that: the method for processing the original data specifically comprises the following steps:
removing invalid targets, static targets and redundant data in the original data;
determining the effective target through data association, and initializing a track for the effective target, wherein the data association uses the following quantities:
d is the relative distance of the target measured by the millimeter wave radar;
v is the relative speed of the target measured by the millimeter wave radar;
phi is a target angle measured by the millimeter wave radar;
m is the total number of targets measured by the radar in each period;
i is the time (cycle) index, and j indexes the jth radar datum of each period;
and tracking the formed track to obtain the first target track of the effective target.
3. The night target detection and tracking method based on millimeter wave radar and vision fusion of claim 1, characterized in that: the original image is data obtained by converting captured light source signals into digital signals through a CMOS or CCD image sensor according to a Bayer array.
4. The night target detection and tracking method based on millimeter wave radar and vision fusion of claim 3, wherein: the method for processing the original image comprises the following steps:
S1: brightening the original image;
S2: obtaining a bounding box, a category and a confidence for each effective target based on visual deep learning;
S3: tracking the effective target, and acquiring the second target track of the effective target based on a DeepSORT network.
5. The night target detection and tracking method based on millimeter wave radar and vision fusion of claim 4, wherein: the S1 specifically includes:
S11: decomposing the Bayer array pattern of the original image into a four-channel image by color;
S12: adjusting the range of the four-channel image data by subtracting the black level so that the minimum value is zero;
S13: processing the adjusted four-channel image with a fully convolutional neural network;
S14: performing sub-pixel convolution on the image information output by the fully convolutional neural network to generate a high-resolution color image.
6. The night target detection and tracking method based on millimeter wave radar and vision fusion of claim 5, wherein: the S2 specifically includes:
S21: collecting original Bayer-array images, and labeling all categories of traffic participants in the original images after brightening;
S22: training a YOLO v4 network model, and saving the weight file;
S23: inputting the brightened image into the trained YOLO v4 network model, computing and outputting the bounding box, category and confidence of the effective target through the weight file, and finally storing the bounding box and category of the effective target.
7. The night target detection and tracking method based on millimeter wave radar and vision fusion of claim 6, wherein: the S3 specifically includes:
S31: passing the bounding-box information of the effective targets and the original image to a DeepSORT network, which uses a CNN to extract features from the image patch inside each effective target's bounding box;
S32: performing motion-feature association on all effective targets of two adjacent frames; successfully associated targets proceed to S33, motion-feature association being successful if the distance between the detection boxes of the effective targets in the two adjacent frames is smaller than a threshold;
S33: performing appearance-information association on the remaining effective targets of the two adjacent frames, and if the association succeeds, proceeding to S34, appearance-information association being successful if the appearance-similarity distance between the effective targets of the two adjacent frames is smaller than a specified threshold;
S34: performing fusion matching on the effective targets whose motion and appearance features were both successfully associated to obtain a final fusion result, the match being defined as successful if the fusion result is smaller than a threshold.
8. The night target detection and tracking method based on millimeter wave radar and vision fusion of claim 7, wherein: the method for matching the first target track with the second target track specifically comprises the following steps:
projecting the central coordinate point of the first track onto an image according to the conversion relation between the millimeter wave radar coordinate system and the pixel coordinate system to obtain the central projection coordinate point of the first track;
and calculating the Euclidean distance between the central projection coordinate point of the first track and the corresponding central coordinate point of the second track, and if the Euclidean distance is smaller than a specified threshold value, defining that the matching is successful.
Priority Applications (1)
- CN202010699523.5A, filed 2020-07-20: Night target detection and tracking method based on millimeter wave radar and vision fusion
Publications (1)
- CN111967498A, published 2020-11-20
Family
- ID: 73361702
- Family Applications (1): CN202010699523.5A, filed 2020-07-20, status pending (CN)
Patent Citations (2)
- CN109459750A (priority 2018-10-19, published 2019-03-12, 吉林大学): A front multi-vehicle tracking method fusing millimeter wave radar with deep-learning vision
- CN111145213A (priority 2019-12-10, published 2020-05-12, 中国银联股份有限公司): Target tracking method, device and system, and computer-readable storage medium
Non-Patent Citations (7)
- Alexey Bochkovskiy et al., "YOLOv4: Optimal Speed and Accuracy of Object Detection", arXiv
- Chen Chen et al., "Learning to See in the Dark", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- Qiao Peng, "Research on Traffic Flow Detection Based on Deep Learning and Edge Task Offloading", China Master's Theses Full-text Database, Engineering Science and Technology II
- Xiang Chongyang, "Research and Implementation of Multi-Object Detection and Tracking Algorithms for Expressway Service Areas", China Master's Theses Full-text Database, Engineering Science and Technology II
- Qin Yu, "Research on Low-Illumination Image Denoising and Enhancement Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology
- Guo Yunlei, "Night Vehicle Recognition Based on Machine Vision and Millimeter Wave Radar", China Master's Theses Full-text Database, Engineering Science and Technology II
- Chen Huiyan et al., "Intelligent Vehicle Theory and Application" (《智能车辆理论与应用》), Beijing Institute of Technology Press, July 2018
Legal Events
- PB01: Publication (application publication date: 2020-11-20)
- SE01: Entry into force of request for substantive examination
- RJ01: Rejection of invention patent application after publication