
CN114037972B - Target detection method, device, equipment and readable storage medium - Google Patents

Target detection method, device, equipment and readable storage medium

Info

Publication number
CN114037972B
CN114037972B CN202111172494.8A CN202111172494A
Authority
CN
China
Prior art keywords
target
obstacle
target sequence
sequence
targets
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111172494.8A
Other languages
Chinese (zh)
Other versions
CN114037972A (en)
Inventor
任聪
付斌
沈忱
钟小凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Voyah Automobile Technology Co Ltd
Original Assignee
Voyah Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Voyah Automobile Technology Co Ltd filed Critical Voyah Automobile Technology Co Ltd
Priority to CN202111172494.8A priority Critical patent/CN114037972B/en
Publication of CN114037972A publication Critical patent/CN114037972A/en
Application granted granted Critical
Publication of CN114037972B publication Critical patent/CN114037972B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a target detection method, device, equipment and readable storage medium. The target detection method comprises the following steps: acquiring a first target sequence of obstacle targets collected by a radar; acquiring an image in front of the vehicle, and calculating a second target sequence of the corresponding obstacle targets in the image based on a convolutional neural network model; after the first target sequence and the second target sequence have been synchronized in time and space, performing target matching between the first target sequence and the second target sequence; and outputting corresponding obstacle prompt information according to the target matching result. With modest computing resources, the invention improves the detection accuracy of the vehicle perception system for targets of different scales, especially small targets, provides accurate data for subsequent decision-making and planning, avoids missed detection of obstacle targets and collisions caused by difficulty in deciding how to react to them, and thereby improves the reliability and safety of intelligent driver assistance.

Description

Target detection method, device, equipment and readable storage medium
Technical Field
The present invention relates to the field of intelligent driving environment sensing, and in particular, to a target detection method, apparatus, device, and readable storage medium.
Background
With the rapid development of vehicle electrification, connectivity, intelligence and sharing, more and more automobile OEMs and research institutions are investing substantial manpower in developing Advanced Driving Assistance Systems (ADAS) for intelligent vehicles. An ADAS system mainly comprises a perception system, a decision system and a planning-and-control system. The perception system, as the foundation of the automated driving functions, mainly provides real-time road environment information and information on obstacles in front of the intelligent vehicle, and its detection accuracy plays a vital role in the reliability and safety of the whole ADAS system. The sensors adopted by the perception system mainly include visible-light cameras, infrared cameras, millimeter-wave radars, laser radars, ultrasonic radars and other sensor types. Visible-light cameras have the advantages of low cost, mature technology and accurate identification of obstacle category information, but they struggle to acquire three-dimensional information about obstacles and depend heavily on ambient light. Millimeter-wave radars have strong penetration, can work in rain, fog, smoke and low-illumination environments, and have clear advantages in distance, angle and speed measurement, but they have difficulty determining the type and shape of target objects and have low resolution.
In the prior art, most perception schemes for intelligent vehicles perform well in a single scene under good lighting, but struggle to achieve good results on real complex roads, in adverse weather and in extreme scenarios. The main reasons are as follows. First, a traditional perception system relies mainly on a single sensor to obtain accurate obstacle information; this yields limited information and insufficient robustness, and greatly affects the accuracy of ADAS planning and control. Second, existing perception systems adopt fusion algorithms in which the millimeter-wave radar is primary and vision is auxiliary, but such fusion strategies struggle to use the data from different sensors efficiently, and suffer from low detection accuracy, poor detection of stationary targets and high demands on the computing resources of the in-vehicle platform. For example, if the radar detects an obstacle but the camera fails to recognize its type, the ADAS system may find it hard to make a decision or may even collide; or, if the radar fails to detect an obstacle that actually exists in front of the vehicle because of noise interference or a slightly moving target, the image recognition algorithm of the prior-art fusion scheme will not judge the obstacle type within an ROI (region of interest), so the perception system may miss the target, seriously affecting the safety and reliability of intelligent driver assistance.
Disclosure of Invention
The main purpose of the invention is to provide a target detection method, device, equipment and readable storage medium, aiming to solve the technical problems of the prior art that obstacle detection methods based on the fusion of millimeter-wave radar and vision place high demands on platform resources and suffer from low detection accuracy, poor robustness and poor real-time performance.
In a first aspect, the present invention provides a target detection method comprising the steps of:
acquiring a first target sequence of an obstacle target acquired by a radar;
acquiring a front image of a vehicle, and calculating a second target sequence of a corresponding obstacle target in the image based on a convolutional neural network model;
after the first target sequence and the second target sequence complete space-time synchronization, performing target matching on the first target sequence and the second target sequence;
and outputting corresponding obstacle prompt information according to the target matching result.
Optionally, the step of acquiring the first target sequence of the obstacle target acquired by the radar includes:
acquiring real-time environment data in front of a vehicle, which is acquired by a radar;
according to the data, calculating a target sequence of an obstacle target;
if the value of the radar cross-section of an obstacle target in the target sequence is smaller than a first preset threshold and the value of its signal-to-noise ratio is smaller than a second preset threshold, judging that the target signal is a null signal;
if the value of the lateral distance between an obstacle target in the target sequence and the vehicle is larger than a third preset threshold, judging that the target signal is a non-dangerous signal;
if the accumulated number of detections of an obstacle target in the target sequence is smaller than a fourth preset threshold, judging that the target signal is an interference signal;
and screening out the null signals, non-dangerous signals and interference signals from the target sequence to obtain the first target sequence of obstacle targets acquired by the radar.
Optionally, the step of acquiring the front image of the vehicle and calculating the second target sequence of the corresponding obstacle target in the image based on the convolutional neural network model includes:
acquiring an image in front of the vehicle, and inputting the image into a trained convolutional neural network model;
performing feature extraction on the obstacle targets several times by means of MobileNet depthwise separable convolutions in a downsampling manner, generating feature maps of different scales;
fusing the feature maps of different scales generated by the last 3 feature extractions in an upsampling manner, generating 3 layers of first fused feature maps with different semantic and position information;
applying convolutions with different dilation rates to each layer of first fused feature map and performing adaptive feature fusion with the corresponding fusion weights, generating 3 second fused feature maps with different receptive field sizes as 3 target prediction layers of different scales;
and generating anchor boxes on the second fused feature map corresponding to each target prediction layer according to preset anchor parameters, and identifying the obstacle targets within the anchor boxes by feature matching and non-maximum suppression to obtain the position, category and confidence information of the obstacle targets, thereby generating the second target sequence of the corresponding obstacle targets in the image.
Optionally, the step of outputting the corresponding obstacle prompting information according to the target matching result includes:
if no first obstacle target exists in the first target sequence but a second obstacle target exists in the second target sequence, outputting corresponding obstacle prompt information according to the category information of the second obstacle target in the second target sequence when the confidence of the second obstacle target in the second target sequence is larger than a fifth preset threshold.
Optionally, the step of outputting the corresponding obstacle prompting information according to the target matching result includes:
if a first obstacle target exists in the first target sequence, generating a region of interest according to the target point corresponding to the first obstacle target in the first target sequence;
if no second obstacle target exists in the second target sequence, inputting the image of the region of interest into the convolutional neural network model, and outputting corresponding obstacle prompt information after calculating the category information of the first obstacle target corresponding to the region of interest.
Optionally, after the step of generating the region of interest according to the target point corresponding to the first obstacle target in the first target sequence, the method further includes:
if a second obstacle target exists in the second target sequence, judging whether the second obstacle target coincides with the region of interest;
if the two coincide, outputting corresponding obstacle prompt information according to the category information of the second obstacle target in the second target sequence.
Optionally, after the step of determining whether the second obstacle target coincides with the region of interest if the second obstacle target exists in the second target sequence, the method further includes:
if the two do not coincide, outputting corresponding obstacle prompt information according to the category information of the second obstacle target in the second target sequence when the confidence of the second obstacle target in the second target sequence is larger than the fifth preset threshold;
and inputting the image of the region of interest into the convolutional neural network model, and outputting corresponding obstacle prompt information after calculating the category information of the first obstacle target corresponding to the region of interest.
In a second aspect, the present invention also provides an object detection apparatus, including:
The acquisition module is used for acquiring a first target sequence of the obstacle target acquired by the radar;
The calculation module is used for acquiring a front image of the vehicle and calculating a second target sequence of a corresponding obstacle target in the image based on a convolutional neural network model;
the matching module is used for carrying out target matching on the first target sequence and the second target sequence after the first target sequence and the second target sequence are subjected to space-time synchronization;
and the output module is used for outputting corresponding obstacle prompt information according to the target matching result.
Optionally, the acquiring module is configured to:
acquiring real-time environment data in front of a vehicle, which is acquired by a radar;
according to the data, calculating a target sequence of an obstacle target;
if the value of the radar cross-section of an obstacle target in the target sequence is smaller than a first preset threshold and the value of its signal-to-noise ratio is smaller than a second preset threshold, judging that the target signal is a null signal;
if the value of the lateral distance between an obstacle target in the target sequence and the vehicle is larger than a third preset threshold, judging that the target signal is a non-dangerous signal;
if the accumulated number of detections of an obstacle target in the target sequence is smaller than a fourth preset threshold, judging that the target signal is an interference signal;
and screening out the null signals, non-dangerous signals and interference signals from the target sequence to obtain the first target sequence of obstacle targets acquired by the radar.
Optionally, the computing module is configured to:
acquiring an image in front of the vehicle, and inputting the image into a trained convolutional neural network model;
performing feature extraction on the obstacle targets several times by means of MobileNet depthwise separable convolutions in a downsampling manner, generating feature maps of different scales;
fusing the feature maps of different scales generated by the last 3 feature extractions in an upsampling manner, generating 3 layers of first fused feature maps with different semantic and position information;
applying convolutions with different dilation rates to each layer of first fused feature map and performing adaptive feature fusion with the corresponding fusion weights, generating 3 second fused feature maps with different receptive field sizes as 3 target prediction layers of different scales;
and generating anchor boxes on the second fused feature map corresponding to each target prediction layer according to preset anchor parameters, and identifying the obstacle targets within the anchor boxes by feature matching and non-maximum suppression to obtain the position, category and confidence information of the obstacle targets, thereby generating the second target sequence of the corresponding obstacle targets in the image.
Optionally, the output module is configured to:
if no first obstacle target exists in the first target sequence but a second obstacle target exists in the second target sequence, outputting corresponding obstacle prompt information according to the category information of the second obstacle target in the second target sequence when the confidence of the second obstacle target in the second target sequence is larger than a fifth preset threshold;
if a first obstacle target exists in the first target sequence, generating a region of interest according to the target point corresponding to the first obstacle target in the first target sequence;
if no second obstacle target exists in the second target sequence, inputting the image of the region of interest into the convolutional neural network model, and outputting corresponding obstacle prompt information after calculating the category information of the first obstacle target corresponding to the region of interest.
Optionally, the output module is further configured to:
if a second obstacle target exists in the second target sequence, judging whether the second obstacle target coincides with the region of interest;
if the two coincide, outputting corresponding obstacle prompt information according to the category information of the second obstacle target in the second target sequence;
if the two do not coincide, outputting corresponding obstacle prompt information according to the category information of the second obstacle target in the second target sequence when the confidence of the second obstacle target in the second target sequence is larger than the fifth preset threshold;
and inputting the image of the region of interest into the convolutional neural network model, and outputting corresponding obstacle prompt information after calculating the category information of the first obstacle target corresponding to the region of interest.
In a third aspect, the present invention also provides an object detection apparatus comprising a processor, a memory, and an object detection program stored on the memory and executable by the processor, wherein the object detection program, when executed by the processor, implements the steps of the object detection method as described above.
In a fourth aspect, the present invention further provides a readable storage medium, wherein the readable storage medium stores an object detection program, and the object detection program, when executed by a processor, implements the steps of the object detection method as described above.
According to the invention, a first target sequence of obstacle targets acquired by a radar is obtained; an image in front of the vehicle is acquired, and a second target sequence of the corresponding obstacle targets in the image is calculated based on a convolutional neural network model; after the first target sequence and the second target sequence have been synchronized in time and space, target matching is performed between them; and corresponding obstacle prompt information is output according to the target matching result. With modest computing resources, the invention improves the detection accuracy of the vehicle perception system for targets of different scales, especially small targets, provides accurate data for subsequent decision-making and planning, avoids missed detection of obstacle targets and collisions caused by difficulty in deciding how to react to them, and thereby improves the reliability and safety of intelligent driver assistance.
Drawings
Fig. 1 is a schematic hardware structure of an object detection device according to an embodiment of the present invention;
FIG. 2 is a flow chart of an embodiment of the target detection method of the present invention;
FIG. 3 is a schematic diagram showing the relative positions of a radar and a camera in a coordinate system according to an embodiment of the present invention;
FIG. 4 is a flow chart of object matching according to an embodiment of the object detection method of the present invention;
Fig. 5 is a schematic functional block diagram of an embodiment of the object detection device of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In a first aspect, an embodiment of the present invention provides an object detection apparatus.
Referring to fig. 1, fig. 1 is a schematic hardware structure of an object detection device according to an embodiment of the present invention. In an embodiment of the present invention, the object detection device may include a processor 1001 (e.g., a Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components; the user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); the network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity, Wi-Fi, interface); the memory 1005 may be a high-speed random access memory (RAM) or a non-volatile memory, such as a disk memory, and the memory 1005 may alternatively be a storage device independent of the processor 1001. Those skilled in the art will appreciate that the hardware configuration shown in fig. 1 does not limit the invention and may include more or fewer components than shown, combine certain components, or arrange the components differently.
With continued reference to fig. 1, an operating system, a network communication module, a user interface module, and an object detection program may be included in the memory 1005, which is one type of computer storage medium in fig. 1. The processor 1001 may call the target detection program stored in the memory 1005, and execute the target detection method provided by the embodiment of the present invention.
In a second aspect, an embodiment of the present invention provides a target detection method.
Referring to fig. 2, fig. 2 is a flow chart of an embodiment of the target detection method of the present invention.
In an embodiment of the present invention, the target detection method includes:
Step S10, a first target sequence of an obstacle target acquired by a radar is acquired;
the step S10 specifically includes:
acquiring real-time environment data in front of a vehicle, which is acquired by a radar;
according to the data, calculating a target sequence of an obstacle target;
if the value of the radar cross-section of an obstacle target in the target sequence is smaller than a first preset threshold and the value of its signal-to-noise ratio is smaller than a second preset threshold, judging that the target signal is a null signal;
if the value of the lateral distance between an obstacle target in the target sequence and the vehicle is larger than a third preset threshold, judging that the target signal is a non-dangerous signal;
if the accumulated number of detections of an obstacle target in the target sequence is smaller than a fourth preset threshold, judging that the target signal is an interference signal;
and screening out the null signals, non-dangerous signals and interference signals from the target sequence to obtain the first target sequence of obstacle targets acquired by the radar.
In this embodiment, the millimeter-wave radar acquires real-time environmental data in front of the vehicle, including obstacle targets such as preceding vehicles and pedestrians. The radar signal transmission format is parsed to obtain information such as the relative distance, relative speed, relative angle, radar cross-section (RCS) and signal-to-noise ratio of each obstacle target with respect to the radar's position. Interference from false target signals is then eliminated according to the relative distance, relative speed, relative angle, radar cross-section and signal-to-noise ratio in the parsed information, yielding the screened, effective sequence of obstacle targets detected by the radar.
Stationary targets in the driving environment are filtered by setting a first preset threshold on the radar cross-section of a target signal and a second preset threshold on its signal-to-noise ratio: when the radar cross-section is smaller than the first preset threshold and the signal-to-noise ratio is smaller than the second preset threshold, the target signal is a null signal. Non-dangerous targets outside the ego lane and the adjacent lanes are filtered by setting a third preset threshold on the lateral distance between the target signal and the vehicle, where the lateral distance is determined from the relative distance in the parsed information and the relative displacement of the radar's mounting position on the vehicle; when the lateral distance is larger than the third preset threshold, the target signal is a non-dangerous signal. Invalid noise interference is suppressed by setting a fourth preset threshold on the accumulated number of detections of the target signal; when the accumulated number of detections is smaller than the fourth preset threshold, the target signal has appeared too few times within a short period and is treated as invalid.
Once null signals, non-dangerous signals and interference signals have been identified against these thresholds, the false target signals are screened out, yielding the first target sequence of obstacle targets acquired by the radar. The first target sequence contains information such as the relative distance, relative speed, relative angle, radar cross-section and signal-to-noise ratio of each effective obstacle target with respect to the radar's position.
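As a rough illustration of this screening step, the Python sketch below applies the four thresholds to a list of parsed radar targets. The field names, threshold values and units are illustrative assumptions, not values taken from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RadarTarget:
    rel_distance: float    # m, relative to the radar
    rel_speed: float       # m/s
    rel_angle: float       # deg
    rcs: float             # radar cross-section, dBsm
    snr: float             # signal-to-noise ratio, dB
    lateral_offset: float  # m, lateral distance to the ego vehicle
    hit_count: int         # accumulated number of detection cycles

def filter_radar_targets(targets: List[RadarTarget],
                         rcs_min: float = -5.0,     # first preset threshold (assumed)
                         snr_min: float = 10.0,     # second preset threshold (assumed)
                         lateral_max: float = 5.0,  # third preset threshold (assumed)
                         hits_min: int = 3          # fourth preset threshold (assumed)
                         ) -> List[RadarTarget]:
    """Drop null, non-dangerous and interference signals; keep the first target sequence."""
    kept = []
    for t in targets:
        if t.rcs < rcs_min and t.snr < snr_min:
            continue  # null signal: weak echo with low SNR, likely clutter
        if abs(t.lateral_offset) > lateral_max:
            continue  # non-dangerous signal: outside the ego and adjacent lanes
        if t.hit_count < hits_min:
            continue  # interference signal: not observed often enough to be trusted
        kept.append(t)
    return kept
```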
Step S20, acquiring a front image of a vehicle, and calculating a second target sequence of a corresponding obstacle target in the image based on a convolutional neural network model;
the step S20 specifically includes:
acquiring an image in front of the vehicle, and inputting the image into a trained convolutional neural network model;
performing feature extraction on the obstacle targets several times by means of MobileNet depthwise separable convolutions in a downsampling manner, generating feature maps of different scales;
fusing the feature maps of different scales generated by the last 3 feature extractions in an upsampling manner, generating 3 layers of first fused feature maps with different semantic and position information;
applying convolutions with different dilation rates to each layer of first fused feature map and performing adaptive feature fusion with the corresponding fusion weights, generating 3 second fused feature maps with different receptive field sizes as 3 target prediction layers of different scales;
and generating anchor boxes on the second fused feature map corresponding to each target prediction layer according to preset anchor parameters, and identifying the obstacle targets within the anchor boxes by feature matching and non-maximum suppression to obtain the position, category and confidence information of the obstacle targets, thereby generating the second target sequence of the corresponding obstacle targets in the image.
In this embodiment, the intrinsic and extrinsic parameters of the camera are calibrated by Zhang's calibration method to generate the camera's intrinsic matrix, extrinsic matrix and distortion matrix. The camera intrinsic matrix contains parameters such as fx, fy, u0 and v0; the camera extrinsic matrix comprises the extrinsic rotation matrix R and the translation matrix T; the camera distortion matrix describes lens distortion with 5 distortion parameters, Q = (k1, k2, k3, p1, p2). For an image acquired by the camera, the objects at the corresponding positions in the image are put into one-to-one correspondence with the objects in the three-dimensional scene through the camera's intrinsic and extrinsic matrices, giving the relative distance and relative speed of a target with respect to the camera's position; and a position-corrected image is obtained from the acquired image through the camera's distortion matrix.
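A minimal calibration-and-undistortion sketch along these lines, using OpenCV with a checkerboard target, might look as follows. The board geometry, file paths and variable names are assumptions, and note that OpenCV reports the distortion coefficients in the order (k1, k2, p1, p2, k3) rather than the (k1, k2, k3, p1, p2) ordering used above.

```python
import glob
import cv2
import numpy as np

# One inner-corner grid per calibration image; a 9x6 checkerboard is an assumption.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):        # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K holds fx, fy, u0, v0; dist holds (k1, k2, p1, p2, k3). At least one board
# image is assumed to have been found above.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Undistort a frame from the front camera before feeding it to the detector.
frame = cv2.imread("front_view.png")                 # hypothetical frame
undistorted = cv2.undistort(frame, K, dist)
```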
To cope with the limited computing resources of the in-vehicle computing platform, a lightweight convolutional neural network model based on deep learning is designed to detect, in real time and with high accuracy, the obstacle targets contained in the images acquired by the camera, and to generate the visually detected obstacle target sequence. Starting from the current YOLOv4 target detection algorithm and optimizing it, a convolutional neural network model is constructed that performs feature extraction with MobileNet depthwise separable convolutions and performs adaptive feature fusion on multi-scale feature maps according to different receptive fields, achieving high-accuracy target detection with lower computing resources.
Based on the obstacle detection requirements of the operating scenarios of the vehicle's ADAS, data on obstacle targets in real road environments are collected, the obstacle targets are classified and annotated, and a training database for the automated-driving perception model is built. The convolutional neural network model is trained on this database; the model is optimized with stochastic gradient descent and a stepwise learning-rate decay, and the images in the dataset are augmented and expanded with several online data augmentation methods. Finally, the trained convolutional neural network model is optimized with TensorRT and deployed on the in-vehicle computing platform with limited computing resources to detect obstacle targets in front of the vehicle in real time.
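A compressed sketch of this training setup (stochastic gradient descent with stepwise learning-rate decay) is shown below. The tiny stand-in network, random tensors, class count and hyper-parameters are placeholders for the real MobileNet-based detector and the annotated obstacle database, which are not reproduced in the patent text.

```python
import torch
import torch.nn as nn

# Stand-in network; the real model is the lightweight detector described above.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 5))
criterion = nn.CrossEntropyLoss()

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)  # stepwise LR decay

for epoch in range(90):
    images = torch.rand(8, 3, 320, 320)    # stands in for augmented training images
    labels = torch.randint(0, 5, (8,))     # stands in for obstacle class labels
    loss = criterion(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                       # learning rate decays step by step

# In deployment, the trained network would additionally be optimized with TensorRT
# and run on the resource-constrained in-vehicle computing platform.
```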
After the camera acquires an image in front of the vehicle, the acquired image is input into the trained convolutional neural network model, and MobileNet depthwise separable convolutions are used to extract features of the obstacle targets several times in a downsampling manner, generating feature maps of different scales. The feature maps of different scales generated by the last 3 feature extractions are fused in an upsampling manner to generate 3 layers of first fused feature maps with different semantic and position information. Convolutions with different dilation rates are applied to each layer of first fused feature map, and adaptive feature fusion is performed with the corresponding fusion weights, generating 3 second fused feature maps with different receptive field sizes as 3 target prediction layers of different scales. Anchor boxes are generated on the second fused feature map corresponding to each target prediction layer according to preset anchor parameters, and the obstacle targets within the anchor boxes are identified by feature matching and non-maximum suppression, giving the position, category and confidence information of the obstacle targets and generating the second target sequence of the corresponding obstacle targets in the image acquired by the camera. The second target sequence contains the position, category and confidence information of the obstacle targets, as well as their relative distance and relative speed with respect to the camera's position.
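The following PyTorch sketch illustrates, under stated assumptions, one way to realize a single prediction layer of the kind described: parallel 3x3 convolutions with different dilation rates are blended by learned, softmax-normalized fusion weights, and a 1x1 convolution emits per-anchor box, objectness and class predictions. Channel counts, dilation rates and the anchor/class numbers are illustrative, not taken from the patent.

```python
import torch
import torch.nn as nn

class DilatedFusionHead(nn.Module):
    """One target prediction layer: dilated-convolution branches fused with
    learned weights, followed by a 1x1 prediction convolution."""
    def __init__(self, channels: int, num_anchors: int = 3, num_classes: int = 5):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in (1, 2, 3)
        ])
        self.fusion_logits = nn.Parameter(torch.zeros(3))   # adaptive fusion weights
        self.predict = nn.Conv2d(channels, num_anchors * (5 + num_classes), 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.fusion_logits, dim=0)
        fused = sum(wi * branch(x) for wi, branch in zip(w, self.branches))
        return self.predict(fused)

# Three first-fused feature maps of different scales, e.g. strides 8/16/32 of a 320x320 input.
p3, p4, p5 = (torch.rand(1, 64, s, s) for s in (40, 20, 10))
heads = [DilatedFusionHead(64) for _ in range(3)]
predictions = [head(p) for head, p in zip(heads, (p3, p4, p5))]   # 3 prediction layers
```

Anchor decoding and non-maximum suppression would then be applied to these raw outputs to obtain the final boxes, categories and confidences.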
Step S30, after the first target sequence and the second target sequence are subjected to space-time synchronization, performing target matching on the first target sequence and the second target sequence;
In this embodiment, step S10 obtains the first target sequence from the obstacle target information acquired by the millimeter-wave radar sensor, and step S20 obtains the second target sequence from the obstacle target information acquired by the camera sensor. Before matching the obstacle targets, the two sensors must be synchronized in time and space so that the values measured by the different sensors for the same target are converted into the same reference coordinate system.
Take the case where the vehicle's millimeter-wave radar and camera sensor are mounted on the central axis of the vehicle, with the relative mounting positions shown in fig. 3: Or-XrYrZr denotes the millimeter-wave radar coordinate system, Ow-XwYwZw denotes the vehicle coordinate system, Oc-XcYcZc denotes the camera coordinate system, Z0 denotes the distance between the millimeter-wave radar coordinate system and the camera coordinate system along the Z axis, Z1 denotes the distance between the camera coordinate system and the vehicle coordinate system along the Z axis, and H denotes the distance between the millimeter-wave radar and camera coordinate systems and the vehicle coordinate system along the Y axis. Since the positions of the sensors do not change after they are mounted on the vehicle, the millimeter-wave radar and camera data can be spatially synchronized through the vehicle coordinate system. The millimeter-wave radar and the camera are also synchronized in time: taking the data acquired by the millimeter-wave radar, which has the lower sampling frequency, as the reference, a multi-threaded working mode is adopted to achieve time synchronization of the radar and camera data.
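A simplified spatial-alignment sketch using the coordinate frames just described is given below. It assumes the three coordinate systems share the same axis orientation (identity rotations), so only translations built from Z0, Z1 and H are applied before a pinhole projection with an assumed intrinsic matrix; all numerical values are placeholders, not calibration results from the patent.

```python
import numpy as np

# Placeholder geometry: radar and camera both sit on the central axis, height H
# above the vehicle-frame origin; Z0 and Z1 are the longitudinal offsets of Fig. 3.
Z0, Z1, H = 1.2, 0.8, 0.9
K = np.array([[800.0,   0.0, 640.0],   # fx, 0,  u0
              [  0.0, 800.0, 360.0],   # 0,  fy, v0
              [  0.0,   0.0,   1.0]])

t_radar_in_vehicle  = np.array([0.0, H, Z0 + Z1])   # radar origin in the vehicle frame
t_camera_in_vehicle = np.array([0.0, H, Z1])        # camera origin in the vehicle frame

def radar_to_camera(p_radar: np.ndarray) -> np.ndarray:
    """Radar frame -> vehicle frame -> camera frame, assuming identity rotations."""
    p_vehicle = p_radar + t_radar_in_vehicle
    return p_vehicle - t_camera_in_vehicle

def project_to_image(p_camera: np.ndarray) -> np.ndarray:
    """Pinhole projection of a camera-frame point onto the image plane."""
    u, v, w = K @ p_camera
    return np.array([u / w, v / w])

# A radar target 30 m ahead and 1.5 m to the side, projected into the camera image.
print(project_to_image(radar_to_camera(np.array([1.5, 0.0, 30.0]))))
```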
After the first target sequence acquired by the millimeter wave radar and the second target sequence acquired by the camera are subjected to space-time synchronization in the above manner, the first target sequence acquired by the radar and the second target sequence acquired by the camera can be subjected to target matching under the same space-time.
Step S40, outputting corresponding obstacle prompt information according to the target matching result.
In this embodiment, after the first target sequence acquired by the radar and the second target sequence acquired by the camera have been matched in the same time and space, four situations can arise. First, a first obstacle target exists in the radar's first target sequence, a second obstacle target exists in the camera's second target sequence, and the two coincide at the same position in the same time and space, i.e. they are the same obstacle target. Second, a first obstacle target exists in the radar's first target sequence and a second obstacle target exists in the camera's second target sequence, but they are at different positions in the same time and space, i.e. at some positions there is a first obstacle target acquired by the radar and at others a second obstacle target acquired by the camera. Third, a first obstacle target exists in the radar's first target sequence but no second obstacle target exists in the camera's second target sequence, i.e. only the radar has detected the target. Fourth, a second obstacle target exists in the camera's second target sequence but no first obstacle target exists in the radar's first target sequence, i.e. only the camera has detected the target. Corresponding obstacle prompt information is output according to these four different target matching situations.
Taking fig. 4 as an example: first, judge whether an obstacle target exists in sequence 1 acquired by the radar. If an obstacle target exists in sequence 1, generate an ROI (region of interest) from its target point. Then judge whether an obstacle target exists in sequence 2 acquired by the camera. If no obstacle target exists in sequence 2, input the image of the ROI into the convolutional neural network model to obtain the category of the obstacle target, since sequence 1 contains no category information. If an obstacle target exists in sequence 2, judge whether the obstacle target of sequence 2 coincides with the ROI. If they coincide, i.e. they are the same obstacle target, obtain the category directly from sequence 2, since sequence 2 contains category information. If they do not coincide, they are different obstacle targets: because sequence 1 contains no category information, input the image of the ROI into the convolutional neural network model to obtain the category of the obstacle target in sequence 1; at the same time, because the obstacle target in sequence 2 has been observed by the camera only, judge whether that target is credible according to the confidence information contained in sequence 2, and output the category of the obstacle target in sequence 2 directly when its confidence is larger than X.
If no obstacle target exists in sequence 1, judge whether an obstacle target exists in sequence 2 acquired by the camera. If an obstacle target exists in sequence 2, it has been observed by the camera only, so judge whether it is credible according to the confidence information contained in sequence 2, and output its category directly when the confidence is larger than X. If no obstacle target exists in sequence 2, no obstacle target exists in the current time and space.
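The decision flow of fig. 4 can be summarized, under simplifying assumptions, by the sketch below: the coincidence test between a vision detection and a radar ROI is approximated with an IoU threshold, `classify_roi` stands in for running the convolutional neural network on the ROI crop, and `conf_threshold` plays the role of X (the fifth preset threshold). All names and values here are illustrative.

```python
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]   # (x1, y1, x2, y2) in image coordinates

def iou(a: Box, b: Box) -> float:
    """Intersection over union, used as the overlap test between a vision box and an ROI."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def match_targets(radar_rois: List[Box],
                  vision_dets: List[Tuple[Box, str, float]],   # (box, class, confidence)
                  classify_roi: Callable[[Box], str],          # CNN run on an ROI crop (assumed helper)
                  conf_threshold: float = 0.5,                 # the "fifth preset threshold" X
                  iou_threshold: float = 0.3) -> List[str]:
    prompts = []
    matched_vision = set()
    for roi in radar_rois:
        overlaps = [i for i, (box, _, _) in enumerate(vision_dets) if iou(roi, box) >= iou_threshold]
        if overlaps:
            # Radar and camera see the same obstacle: reuse the vision category.
            for i in overlaps:
                matched_vision.add(i)
                prompts.append(f"obstacle ({vision_dets[i][1]}) confirmed by radar and camera")
        else:
            # Radar-only target: classify the ROI crop with the CNN.
            prompts.append(f"obstacle ({classify_roi(roi)}) detected by radar only")
    for i, (box, cls, conf) in enumerate(vision_dets):
        if i not in matched_vision and conf > conf_threshold:
            # Camera-only target, accepted only if its confidence is high enough.
            prompts.append(f"obstacle ({cls}) detected by camera only")
    return prompts
```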
Further, in an embodiment, the step S40 includes:
if no first obstacle target exists in the first target sequence but a second obstacle target exists in the second target sequence, outputting corresponding obstacle prompt information according to the category information of the second obstacle target in the second target sequence when the confidence of the second obstacle target in the second target sequence is larger than a fifth preset threshold.
In this embodiment, if no first obstacle target exists in the first target sequence but a second obstacle target exists in the second target sequence, only the camera has detected the target in the current time and space, and the target matching result must be obtained from the confidence of the second obstacle target in the second target sequence. When that confidence is larger than the fifth preset threshold, corresponding obstacle prompt information is output according to the category information of the second obstacle target in the second target sequence, including the obstacle category and its distance and speed relative to the camera's position.
Further, in an embodiment, the step S40 includes:
if a first obstacle target exists in the first target sequence, generating a region of interest according to the target point corresponding to the first obstacle target in the first target sequence;
if no second obstacle target exists in the second target sequence, inputting the image of the region of interest into the convolutional neural network model, and outputting corresponding obstacle prompt information after calculating the category information of the first obstacle target corresponding to the region of interest.
In this embodiment, if a first obstacle target exists in the first target sequence while no second obstacle target exists in the second target sequence, only the radar has detected the target in the current time and space. The distance and speed of the first obstacle target relative to the radar's position can be obtained from the first target sequence, but its specific category cannot be determined. A region of interest is therefore generated from the target point corresponding to the first obstacle target in the first target sequence, the image of the region of interest is input into the convolutional neural network model, and after the category information of the first obstacle target corresponding to the region of interest has been calculated, corresponding obstacle prompt information is output, including the obstacle category and its distance and speed relative to the camera's position.
Further, in an embodiment, after the step of generating the region of interest according to the target point corresponding to the first obstacle target in the first target sequence, the method further includes:
if a second obstacle target exists in the second target sequence, judging whether the second obstacle target coincides with the region of interest;
if the two obstacle targets overlap, outputting corresponding obstacle prompt information according to the category information of the second obstacle target in the second target sequence.
In this embodiment, if a first obstacle target exists in the first target sequence and a second obstacle target also exists in the second target sequence, the radar has detected the first obstacle target and the camera has detected the second obstacle target in the same time and space. A region of interest is generated from the target point corresponding to the first obstacle target in the first target sequence, and whether the second obstacle target coincides with the region of interest is judged.
If they coincide, i.e. the first obstacle target and the second obstacle target are the same obstacle target in the same time and space, corresponding obstacle prompt information is output according to the category information of the second obstacle target in the second target sequence, including the obstacle category and its distance and speed relative to the camera's position.
Further, in an embodiment, after the step of determining whether the second obstacle target coincides with the region of interest if the second obstacle target exists in the second target sequence, the method further includes:
if the two do not coincide, outputting corresponding obstacle prompt information according to the category information of the second obstacle target in the second target sequence when the confidence of the second obstacle target in the second target sequence is larger than the fifth preset threshold;
and inputting the image of the region of interest into the convolutional neural network model, and outputting corresponding obstacle prompt information after calculating the category information of the first obstacle target corresponding to the region of interest.
In this embodiment, if a first obstacle target exists in the first target sequence and a second obstacle target also exists in the second target sequence, the radar has detected the first obstacle target and the camera has detected the second obstacle target in the same time and space. A region of interest is generated from the target point corresponding to the first obstacle target in the first target sequence, and whether the second obstacle target coincides with the region of interest is judged. If the two do not coincide, they are at different positions in the same time and space: one is the first obstacle target acquired by the radar, and the other is the second obstacle target acquired by the camera.
For the first obstacle target acquired by the radar, the image of the region of interest is input into the convolutional neural network model, and after the category information of the first obstacle target corresponding to the region of interest has been calculated, corresponding obstacle prompt information is output, including the obstacle category and its distance and speed relative to the camera's position.
For the second obstacle target acquired by the camera, the target matching result must be obtained from the confidence of the second obstacle target in the second target sequence. When that confidence is larger than the fifth preset threshold, corresponding obstacle prompt information is output according to the category information of the second obstacle target in the second target sequence, including the obstacle category and its distance and speed relative to the camera's position.
In this embodiment, a first target sequence of obstacle targets acquired by a radar is obtained; an image in front of the vehicle is acquired, and a second target sequence of the corresponding obstacle targets in the image is calculated based on a convolutional neural network model; after the first target sequence and the second target sequence have been synchronized in time and space, target matching is performed between them; and corresponding obstacle prompt information is output according to the target matching result. With modest computing resources, the invention improves the detection accuracy of the vehicle perception system for targets of different scales, especially small targets, provides accurate data for subsequent decision-making and planning, avoids missed detection of obstacle targets and collisions caused by difficulty in deciding how to react to them, and thereby improves the reliability and safety of intelligent driver assistance.
In a third aspect, an embodiment of the present invention further provides an object detection apparatus.
Referring to fig. 5, a functional block diagram of an embodiment of the object detection apparatus is shown.
In this embodiment, the object detection apparatus includes:
An acquisition module 10 for acquiring a first target sequence of obstacle targets acquired by a radar;
the calculation module 20 is configured to acquire a front image of the vehicle, and calculate a second target sequence of a corresponding obstacle target in the image based on a convolutional neural network model;
A matching module 30, configured to perform target matching on the first target sequence and the second target sequence after the first target sequence and the second target sequence complete space-time synchronization;
and the output module 40 is configured to output corresponding obstacle prompt information according to the target matching result.
Further, in an embodiment, the obtaining module 10 is configured to:
acquiring real-time environment data in front of a vehicle, which is acquired by a radar;
according to the data, calculating a target sequence of an obstacle target;
if the value of the radar cross-section of an obstacle target in the target sequence is smaller than a first preset threshold and the value of its signal-to-noise ratio is smaller than a second preset threshold, judging that the target signal is a null signal;
if the value of the lateral distance between an obstacle target in the target sequence and the vehicle is larger than a third preset threshold, judging that the target signal is a non-dangerous signal;
if the accumulated number of detections of an obstacle target in the target sequence is smaller than a fourth preset threshold, judging that the target signal is an interference signal;
and screening out the null signals, non-dangerous signals and interference signals from the target sequence to obtain the first target sequence of obstacle targets acquired by the radar.
Further, in an embodiment, the computing module 20 is configured to:
acquiring an image in front of the vehicle, and inputting the image into a trained convolutional neural network model;
performing feature extraction on the obstacle targets several times by means of MobileNet depthwise separable convolutions in a downsampling manner, generating feature maps of different scales;
fusing the feature maps of different scales generated by the last 3 feature extractions in an upsampling manner, generating 3 layers of first fused feature maps with different semantic and position information;
applying convolutions with different dilation rates to each layer of first fused feature map and performing adaptive feature fusion with the corresponding fusion weights, generating 3 second fused feature maps with different receptive field sizes as 3 target prediction layers of different scales;
and generating anchor boxes on the second fused feature map corresponding to each target prediction layer according to preset anchor parameters, and identifying the obstacle targets within the anchor boxes by feature matching and non-maximum suppression to obtain the position, category and confidence information of the obstacle targets, thereby generating the second target sequence of the corresponding obstacle targets in the image.
Further, in an embodiment, the output module 40 is configured to:
if no first obstacle target exists in the first target sequence but a second obstacle target exists in the second target sequence, output corresponding obstacle prompt information according to the category information of the second obstacle target in the second target sequence when the confidence of the second obstacle target in the second target sequence is larger than a fifth preset threshold;
if a first obstacle target exists in the first target sequence, generate a region of interest according to the target point corresponding to the first obstacle target in the first target sequence;
if no second obstacle target exists in the second target sequence, input the image of the region of interest into the convolutional neural network model, calculate the category information of the first obstacle target corresponding to the region of interest, and then output corresponding obstacle prompt information.
Further, in an embodiment, the output module 40 is further configured to:
if a second obstacle target exists in the second target sequence, determine whether the second obstacle target coincides with the region of interest;
if they coincide, output corresponding obstacle prompt information according to the category information of the second obstacle target in the second target sequence;
if they do not coincide, output corresponding obstacle prompt information according to the category information of the second obstacle target in the second target sequence when the confidence of the second obstacle target in the second target sequence is larger than the fifth preset threshold;
and input the image of the region of interest into the convolutional neural network model, calculate the category information of the first obstacle target corresponding to the region of interest, and then output corresponding obstacle prompt information (this decision logic is summarized in the sketch below).
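To summarize the branching above, the following plain-Python sketch returns the obstacle prompts for one radar/camera target pairing. The overlap test, the 0.6 confidence threshold and the classify_roi callback are illustrative assumptions; in the patent the region of interest comes from the radar target point, and an unmatched radar target is classified by running the convolutional neural network model on the region-of-interest image.

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test for (x1, y1, x2, y2) boxes."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def decide_prompts(radar_target, camera_target, roi, classify_roi,
                   conf_threshold=0.6):
    """Return a list of obstacle prompts for one radar/camera pairing.

    radar_target / camera_target: None when the corresponding sensor has no
    target; camera_target carries 'category', 'confidence' and 'box'.
    roi: region of interest generated from the radar target point.
    classify_roi: callable that runs the CNN on the region-of-interest image.
    """
    prompts = []
    if radar_target is None:
        # Camera-only target: trust it only above the confidence threshold.
        if camera_target and camera_target["confidence"] > conf_threshold:
            prompts.append(f"obstacle ahead: {camera_target['category']}")
        return prompts

    if camera_target is None:
        # Radar-only target: classify the region of interest with the CNN.
        prompts.append(f"obstacle ahead: {classify_roi(roi)}")
        return prompts

    if boxes_overlap(camera_target["box"], roi):
        # Both sensors agree on the same region: use the camera category.
        prompts.append(f"obstacle ahead: {camera_target['category']}")
        return prompts

    # No overlap: keep a confident camera detection and still classify the
    # radar region of interest, since they may be different obstacles.
    if camera_target["confidence"] > conf_threshold:
        prompts.append(f"obstacle ahead: {camera_target['category']}")
    prompts.append(f"obstacle ahead: {classify_roi(roi)}")
    return prompts

# Example: the radar reports a target the camera misses
print(decide_prompts(radar_target={"point": (12.0, 0.4)},
                     camera_target=None,
                     roi=(300, 180, 360, 240),
                     classify_roi=lambda roi: "pedestrian"))
```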
The function implementation of each module in the above target detection apparatus corresponds to the steps of the above target detection method embodiments; their functions and implementation processes are not described in detail herein.
In a fourth aspect, embodiments of the present invention also provide a readable storage medium.
The readable storage medium of the present invention stores a target detection program, wherein the target detection program, when executed by a processor, implements the steps of the target detection method described above.
For the method implemented when the target detection program is executed, reference may be made to the embodiments of the target detection method of the present invention, which are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone; in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal device to perform the methods according to the embodiments of the present invention.
The foregoing description covers only the preferred embodiments of the present invention and does not limit the scope of the invention; any equivalent structure or equivalent process transformation made using the contents of this description and the drawings, whether applied directly or indirectly in other related technical fields, falls within the scope of protection of the present invention.

Claims (9)

1. A target detection method, characterized in that the target detection method comprises:
acquiring a first target sequence of an obstacle target acquired by a radar;
acquiring an image in front of a vehicle, and calculating a second target sequence of a corresponding obstacle target in the image based on a convolutional neural network model;
after the first target sequence and the second target sequence complete space-time synchronization, performing target matching on the first target sequence and the second target sequence;
outputting corresponding obstacle prompt information according to the target matching result;
wherein the step of acquiring the image in front of the vehicle and calculating the second target sequence of the corresponding obstacle target in the image based on the convolutional neural network model comprises:
acquiring the image in front of the vehicle, and inputting the image into a trained convolutional neural network model;
performing multiple rounds of feature extraction on the obstacle target by downsampling with MobileNet depthwise separable convolutions to generate feature maps of different scales;
fusing, by upsampling, the feature maps of different scales generated by the last 3 rounds of feature extraction to generate 3 layers of first fused feature maps with different semantic and positional information;
applying convolutions with different dilation rates to each layer of first fused feature map and performing adaptive feature fusion with the corresponding fusion weights to generate 3 second fused feature maps with different receptive field sizes as 3 target prediction layers of different scales;
and generating anchor boxes on the second fused feature map corresponding to each target prediction layer according to preset anchor parameters, identifying the obstacle targets in the anchor boxes by feature matching and non-maximum suppression to obtain position, category and confidence information of the obstacle targets, and generating the second target sequence of the corresponding obstacle target in the image.
2. The target detection method according to claim 1, wherein the step of acquiring the first target sequence of the obstacle target acquired by the radar comprises:
acquiring real-time environment data in front of the vehicle collected by the radar;
calculating a target sequence of obstacle targets according to the data;
if the radar cross-section value of an obstacle target in the target sequence is smaller than a first preset threshold and its signal-to-noise ratio is smaller than a second preset threshold, determining that the target signal is a null signal;
if the lateral distance between an obstacle target in the target sequence and the vehicle is larger than a third preset threshold, determining that the target signal is a non-dangerous signal;
if the accumulated number of detections of an obstacle target in the target sequence is smaller than a fourth preset threshold, determining that the target signal is an interference signal;
and filtering out the null signals, the non-dangerous signals and the interference signals from the target sequence to obtain the first target sequence of the obstacle target acquired by the radar.
3. The target detection method according to claim 1, wherein the step of outputting corresponding obstacle prompt information according to the target matching result comprises:
if no first obstacle target exists in the first target sequence but a second obstacle target exists in the second target sequence, outputting corresponding obstacle prompt information according to category information of the second obstacle target in the second target sequence when the confidence of the second obstacle target in the second target sequence is larger than a fifth preset threshold.
4. The target detection method according to claim 1, wherein the step of outputting corresponding obstacle prompt information according to the target matching result comprises:
if a first obstacle target exists in the first target sequence, generating a region of interest according to a target point corresponding to the first obstacle target in the first target sequence;
if no second obstacle target exists in the second target sequence, inputting an image of the region of interest into the convolutional neural network model, calculating category information of the first obstacle target corresponding to the region of interest, and then outputting corresponding obstacle prompt information.
5. The target detection method according to claim 4, further comprising, after the step of generating the region of interest according to the target point corresponding to the first obstacle target in the first target sequence:
if a second obstacle target exists in the second target sequence, determining whether the second obstacle target coincides with the region of interest;
if they coincide, outputting corresponding obstacle prompt information according to category information of the second obstacle target in the second target sequence.
6. The target detection method according to claim 5, further comprising, after the step of determining whether the second obstacle target coincides with the region of interest if the second obstacle target exists in the second target sequence:
if they do not coincide, outputting corresponding obstacle prompt information according to category information of the second obstacle target in the second target sequence when the confidence of the second obstacle target in the second target sequence is larger than a fifth preset threshold;
and inputting an image of the region of interest into the convolutional neural network model, calculating category information of the first obstacle target corresponding to the region of interest, and then outputting corresponding obstacle prompt information.
7. A target detection apparatus that performs detection using the target detection method according to any one of claims 1 to 6, characterized in that the target detection apparatus comprises:
an acquisition module, configured to acquire a first target sequence of obstacle targets acquired by a radar;
a calculation module, configured to acquire an image in front of the vehicle and calculate a second target sequence of the corresponding obstacle targets in the image based on a convolutional neural network model;
a matching module, configured to perform target matching on the first target sequence and the second target sequence after the first target sequence and the second target sequence complete space-time synchronization;
and an output module, configured to output corresponding obstacle prompt information according to the target matching result.
8. A target detection device, comprising a processor, a memory, and a target detection program stored on the memory and executable by the processor, wherein the target detection program, when executed by the processor, implements the steps of the target detection method according to any one of claims 1 to 6.
9. A readable storage medium, on which a target detection program is stored, wherein the target detection program, when executed by a processor, implements the steps of the target detection method according to any one of claims 1 to 6.
CN202111172494.8A 2021-10-08 2021-10-08 Target detection method, device, equipment and readable storage medium Active CN114037972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111172494.8A CN114037972B (en) 2021-10-08 2021-10-08 Target detection method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111172494.8A CN114037972B (en) 2021-10-08 2021-10-08 Target detection method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN114037972A CN114037972A (en) 2022-02-11
CN114037972B true CN114037972B (en) 2024-08-13

Family

ID=80134768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111172494.8A Active CN114037972B (en) 2021-10-08 2021-10-08 Target detection method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114037972B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114454874B (en) * 2022-02-21 2023-06-23 岚图汽车科技有限公司 Method and system for preventing sudden braking during automatic parking
CN114596706B (en) * 2022-03-15 2024-05-03 阿波罗智联(北京)科技有限公司 Detection method and device of road side perception system, electronic equipment and road side equipment
CN114572233B (en) * 2022-03-25 2022-11-29 阿波罗智能技术(北京)有限公司 Model set-based prediction method, electronic equipment and automatic driving vehicle

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829386A (en) * 2019-01-04 2019-05-31 Tsinghua University Drivable area detection method for intelligent vehicles based on multi-source information fusion

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018035711A1 (en) * 2016-08-23 2018-03-01 深圳市速腾聚创科技有限公司 Target detection method and system
CN106908783B (en) * 2017-02-23 2019-10-01 Soochow University Obstacle detection method based on multi-sensor information fusion
CN109298415B (en) * 2018-11-20 2020-09-22 中车株洲电力机车有限公司 Method for detecting obstacles on track and road
CN110188696B (en) * 2019-05-31 2023-04-18 华南理工大学 Multi-source sensing method and system for unmanned surface equipment
CN111091591B (en) * 2019-12-23 2023-09-26 阿波罗智联(北京)科技有限公司 Collision detection method and device, electronic equipment and storage medium
CN111401208B (en) * 2020-03-11 2023-09-22 阿波罗智能技术(北京)有限公司 Obstacle detection method and device, electronic equipment and storage medium
CN112115977B (en) * 2020-08-24 2024-04-02 重庆大学 Target detection algorithm based on scale invariance and feature fusion
CN113313154A (en) * 2021-05-20 2021-08-27 四川天奥空天信息技术有限公司 Integrated multi-sensor automatic driving intelligent sensing device


Also Published As

Publication number Publication date
CN114037972A (en) 2022-02-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant