CN114882717B - Object detection system and method based on vehicle-road cooperation
- Publication number: CN114882717B (application CN202210255205.9A)
- Authority: CN (China)
- Prior art keywords: data, state prediction, detection, vehicle, prediction data
- Legal status: Active
Classifications
- G08G1/096725—Systems involving transmission of highway information (e.g. weather, speed limits) where the received information generates an automatic action on the vehicle control
- G01S13/91—Radar or analogous systems specially adapted for traffic control
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0129—Traffic data processing for creating historical data or processing based on historical data
- G08G1/0137—Measuring and analyzing of parameters relative to traffic conditions for specific applications
- G08G1/096775—Systems involving transmission of highway information where the origin of the information is a central station
- G08G1/16—Anti-collision systems
Abstract
The invention discloses an object detection system and method based on vehicle-road cooperation, comprising a cloud server and at least one automatic driving vehicle arranged in an airport, wherein each automatic driving vehicle is provided with at least one vehicle-mounted sensor and comprises a first data acquisition module, a second data acquisition module and a local data fusion module. The object detection method comprises the following steps: the cloud server generates service end fusion data according to scene detection data sent by the A-SMGCS system; at least one automatic driving vehicle in the airport subscribes to and acquires the corresponding service end fusion data stored on the cloud server, then acquires local detection data through its vehicle-mounted sensor, acquires the current vehicle end fusion data, and updates the current vehicle end fusion data using the local detection data and the service end fusion data. The invention is low in cost, leaves only a small vehicle-road cooperative sensing blind area, and obtains the real-time position of dynamic objects earlier and more accurately, thereby offering good safety.
Description
Technical Field
The invention relates to the technical field of automatic driving, in particular to an object detection system and method based on vehicle-road cooperation.
Background
The automatic driving automobile, also called an unmanned automobile, a computer-driven automobile or a wheeled mobile robot, is an intelligent automobile that achieves unmanned operation through a computer system. The technology accumulated decades of history in the 20th century and has shown a trend toward practical use in the 21st century. By means of artificial intelligence, visual computing, radar, monitoring devices and the global positioning system working in concert, the computer can operate the motor vehicle automatically without any active human operation.
Autonomous vehicles typically rely on their own on-board sensors (e.g., lidar, cameras) for self-positioning and surrounding obstacle detection. At present, on-board sensors achieve relatively accurate detection at distances of 100-200 meters, which in ordinary environments is within the safety range for detecting an obstacle and braking, including detecting pedestrians crossing and vehicles driving transversely through an intersection.
This approach suffers from several technical drawbacks. (1) On-board sensors have large blind areas. They generally detect position and obstacles using the straight-line propagation of light, electromagnetic waves, sound waves and the like, and must be mounted on the vehicle body; an occluding object therefore easily blocks them, leaving obstacles behind the occlusion unperceived. This drawback is particularly evident in complex airport environments, where occlusions are numerous, so the detection efficiency of on-board sensors in an airport is far lower than on an open road. (2) On-board sensors have limited capability for detecting dynamic objects. Constrained by detection frequency and processing capacity, they may fail to detect dynamic objects in the environment in time, leading to safety accidents. (3) The overall cost of on-board sensors is high. Because the corresponding sensors must be installed on every vehicle when only on-board sensors are used for position and obstacle detection, overall cost rises as the number of automatic driving vehicles grows. In addition, a large number of on-board sensors operating in close proximity may interfere with one another.
In an airport environment in particular, the safety-distance requirements for aircraft are very strict; a detection distance of 200 meters cannot meet them, and the positions and speeds of vehicles and aircraft need to be detected earlier, so that the corresponding detection results can be used for advance path planning and path adjustment of the automatic driving vehicle.
Therefore, the invention provides an object detection system and method based on vehicle-road cooperation in an airport environment, which integrates the airport's existing field surveillance radar and transponder equipment and fuses the information obtained from this existing equipment with the self-perception information of the automatic driving vehicle, so that vehicles and aircraft in the airport environment are located more accurately, ensuring safety and improving operating efficiency.
Disclosure of Invention
The invention provides an object detection system and method based on vehicle-road cooperation, which solve the above technical problems.
The technical scheme for solving the technical problems is as follows:
an object detection system based on vehicle-road cooperation comprises a cloud server and at least one automatic driving vehicle arranged in an airport, wherein each automatic driving vehicle is provided with at least one vehicle-mounted sensor, and the automatic driving vehicle comprises a first data acquisition module, a second data acquisition module and a local data fusion module, wherein:
The first data acquisition module is used for connecting a cloud server and acquiring service end fusion data generated by the cloud server according to scene detection data;
The second data acquisition module is used for connecting with the vehicle-mounted sensor and acquiring local detection data acquired by the vehicle-mounted sensor;
the local data fusion module is used for acquiring current vehicle-end fusion data and updating the current vehicle-end fusion data by utilizing the local detection data and the service-end fusion data.
In a preferred embodiment, the cloud server is connected to at least one scene device through an a-SMGCS system, where the a-SMGCS system is configured to obtain status information, a global identity ID, a detection time, and a detection reliability of at least one object collected by each scene device in an airport, combine the status information of the corresponding object according to the global identity ID, the detection time, and the detection reliability, correlate the global identity ID, the detection time, the detection reliability, and the corresponding processing result, generate an optimized detection result of each object, and send scene detection data including all the optimized detection results to the cloud server.
In a preferred embodiment, the scene device comprises any one or more of a field surveillance radar, a multi-point positioning system and a broadcast automatic monitoring system.
In a preferred embodiment, the cloud server specifically includes:
the acquisition module is used for acquiring scene detection data in an airport at a preset frequency;
The global data fusion module is used for acquiring the current service end fusion data, predicting the state of the corresponding object at the current moment according to each optimized detection result in the scene detection data, and updating the current service end fusion data according to the prediction results.
In a preferred embodiment, the cloud server further includes an optimizing module, where the optimizing module is configured to obtain raw data collected by the scene device, supplement and correct scene detection data generated by the a-SMGCS system according to the raw data, generate optimized scene detection data, and send the optimized scene detection data to the global data fusion module.
In a preferred embodiment, the global data fusion module specifically includes:
The first prediction unit is used for generating first state prediction data of each object at the current moment according to the state information of the object at the corresponding detection time in the scene detection data, and forming a first state prediction data set;
the second prediction unit is used for acquiring the current service end fusion data, generating second state prediction data of each object in the current service end fusion data at the current moment, and forming a second state prediction data set;
The first matching unit is used for matching the first state prediction data set and the second state prediction data set; if the two sets contain target objects with the same global identity ID, the first updating unit is executed, and if they do not, the second updating unit and/or the third updating unit are executed;
the first updating unit is used for acquiring target state prediction data with higher detection reliability from the first state prediction data and the second state prediction data of the target object, associating the target state prediction data with the global identity ID of the target object, updating the target state prediction data to the second state prediction data set and generating new service end fusion data;
A second updating unit, configured to directly update, when it is determined according to the global identity ID that any object in the first state prediction data set does not exist in the second state prediction data set, the first state prediction data of the corresponding object to the second state prediction data set, so as to generate new server fusion data;
And a third updating unit, configured to, when it is determined according to the global identity ID that any object in the second state prediction data set does not exist in the first state prediction data set, and the number of cyclic matching times reaches a preset threshold, delete data corresponding to the object in the second state prediction data set, and generate new server fusion data.
In a preferred embodiment, the autonomous vehicle further comprises a control module for performing path planning, path adjustment and navigation obstacle avoidance on the own vehicle according to the object detection result.
In a preferred embodiment, the local data fusion module specifically includes:
the data processing unit is used for carrying out combination processing on the local detection data and the server fusion data and generating combination processing data with uniform format; the local detection data comprises state information, local identity ID, detection time and detection reliability of at least one object;
The third prediction unit is used for generating third state prediction data of each object at the current moment according to the state information of the object at the corresponding detection time in the combined processing data, and forming a third state prediction data set;
The fourth prediction unit is used for acquiring current vehicle-end fusion data, generating fourth state prediction data of each object in the current vehicle-end fusion data at the current moment, and forming a fourth state prediction data set;
The second matching unit is used for matching the third state prediction data set and the fourth state prediction data set; if the two sets contain target objects with the same predicted positions, the fourth updating unit is executed, and if they do not, the fifth updating unit and/or the sixth updating unit is executed;
A fourth updating unit, configured to determine whether the ID of the target object is the same, and if the ID is the same and only one ID is a global ID, acquire target state prediction data with higher detection reliability from the third state prediction data and the fourth state prediction data of the target object, associate the target state prediction data with the global ID of the target object, and update the target state prediction data to the fourth state prediction data set;
if the identity IDs are different and both are global identity IDs, the third state prediction data of the corresponding object is directly updated to the fourth state prediction data set;
in any other situation, target state prediction data with higher detection reliability is acquired from the third state prediction data and the fourth state prediction data of the corresponding object, associated with the corresponding identity ID in the current vehicle-end fusion data, and updated to the fourth state prediction data set;
A fifth updating unit, configured to directly update third state prediction data of a corresponding object to the fourth state prediction data set to form new vehicle-end fusion data when it is determined that any object in the third state prediction data set does not exist in the fourth state prediction data set according to the prediction position;
And a sixth updating unit, configured to, when it is determined according to the predicted position that any object in a fourth state prediction data set does not exist in the third state prediction data set, and the number of cycle matching times reaches a preset threshold, delete data corresponding to the object in the fourth state prediction data set, and form new vehicle end fusion data.
The second aspect of the embodiment of the invention provides an object detection method based on vehicle-road cooperation, which comprises the following steps:
step 1, a cloud server generates service end fusion data according to scene detection data sent by an A-SMGCS system;
step 2, at least one automatic driving vehicle in an airport subscribes to and acquires the corresponding service end fusion data stored on the cloud server, acquires local detection data through the vehicle-mounted sensor, acquires the current vehicle end fusion data, and updates the current vehicle end fusion data by utilizing the local detection data and the service end fusion data.
In a preferred embodiment, the a-SMGCS system transmitting scene detection data to the cloud server comprises the steps of: acquiring state information, global identity ID, detection time and detection reliability of at least one object acquired by each scene device in an airport, merging the state information of the corresponding object according to the global identity ID, the detection time and the detection reliability, then correlating the global identity ID, the detection time, the detection reliability and the corresponding processing result to generate an optimized detection result of each object, and transmitting scene detection data containing all the optimized detection results to the cloud server.
Compared with the prior art, the invention has the beneficial effects that:
First, the vehicle-road cooperative sensing blind area of the invention is small. The invention reduces the monitoring blind area of the automatic driving vehicle in two ways. Firstly, because the field surveillance radar is mounted on a tall tower, few objects occlude its high-elevation view, and since the airport environment has no high-rise buildings, vehicles and aircraft in the area around the radar can be monitored comprehensively; moreover, the detection distance of the electromagnetic waves used by the radar is far greater than that of the vehicle-mounted sensor. Secondly, when a vehicle or aircraft is in an area the field surveillance radar cannot detect, the multi-point positioning system and the ADS-B system automatically report its position through its transponder. Therefore, there is essentially no blind area for vehicles and aircraft.
Secondly, the invention obtains the moving direction of dynamic objects earlier and more accurately, improving the safety and passing efficiency of the automatic driving vehicle. Because the field surveillance radar, the multi-point positioning system and the ADS-B system continuously track the position of a target object, its direction of travel can be predicted even when it lies in the blind area of the vehicle-mounted sensor. The global data fusion module and the local data fusion module keep the data updated in time, improving the real-time performance and accuracy of prediction, and the corresponding prediction results can be used for advance path planning, path adjustment and navigation obstacle avoidance of the automatic driving vehicle.
Third, the invention has lower cost. In an airport environment, the field surveillance radar, the multi-point positioning system and the ADS-B system are already in place and need not be deployed anew, and their signals can be shared by many automatic driving vehicles. With this system, there is no need to install many sensors on each automatic driving vehicle, so the cost can be much lower than that of an autonomous driving system relying entirely on on-board sensors.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be regarded as limiting its scope; a person skilled in the art may derive other related drawings from them without inventive effort.
FIG. 1 is a schematic block diagram of an object detection system based on vehicle-road cooperation according to embodiment 1 of the present invention;
fig. 2 is a schematic structural diagram of a global data fusion module in the vehicle-road collaboration-based object detection system according to embodiment 2 of the present invention;
fig. 3 is a schematic structural diagram of a local data fusion module in the vehicle-road collaboration-based object detection system according to embodiment 3 of the present invention;
Fig. 4 is a flow chart of an object detection method based on vehicle-road cooperation according to embodiment 4 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantageous technical effects of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and detailed description. It should be understood that the detailed description is intended to illustrate the invention, and not to limit the invention.
Embodiment 1 of the present invention provides an object detection system based on vehicle-road cooperation, which, as shown in fig. 1, at least includes a cloud server 6 and at least one autonomous vehicle 7 disposed in an airport. By way of example, each autonomous vehicle 7 may include, but is not limited to, at least one on-board sensor 71, a first data acquisition module 72, a second data acquisition module 73 and a local data fusion module 74. The autonomous vehicle 7 in fig. 1 is only an example and does not limit the autonomous vehicle 7 in the vehicle-road cooperation-based object detection system; it may include more or fewer components than illustrated, combine certain components, or use different components. For example, the autonomous vehicle 7 may further include a power management module, an arithmetic processing module, a communication module, a data storage module, input/output devices, a bus, and the like. Specifically:
The first data obtaining module 72 is configured to connect to the cloud server 6, and obtain service end fusion data generated by the cloud server 6 according to the scene detection data;
The second data acquisition module 73 is used for connecting with the vehicle-mounted sensor 71 to acquire local detection data acquired by the vehicle-mounted sensor 71;
the local data fusion module 74 is configured to obtain current vehicle-end fusion data, and update the current vehicle-end fusion data with the local detection data and the service-end fusion data.
This embodiment provides an object detection system based on vehicle-road cooperation that simultaneously acquires the local detection data collected by the vehicle-mounted sensor, the service end fusion data generated by the cloud server from the scene detection data, and the current vehicle end fusion data, and updates the current vehicle end fusion data using the local detection data and the service end fusion data. This improves the real-time performance and accuracy of the current vehicle end fusion data, from which the moving direction of dynamic objects in the airport can be predicted earlier and more accurately, thereby improving the safety and passing efficiency of automatic driving vehicles.
In a preferred embodiment, the cloud server 6 may connect at least one scene device provided in the airport through an a-SMGCS system of the airport and acquire scene detection data through the a-SMGCS system.
The scene equipment comprises any one or more of a field surveillance radar 2, a multi-point positioning system 3 and a broadcast automatic monitoring system 4; its deployment cost is low and the vehicle-road cooperative sensing blind area is small. In a typical airport environment, in order to keep the airport, and in particular its aircraft, safe, the airport's existing advanced surface movement guidance and control system (A-SMGCS) already integrates various field-side and vehicle-mounted devices for real-time position detection of all vehicles and aircraft on the airport surface. These devices include: (1) one or more S-band field surveillance radars (surface monitoring radar, a primary radar) for cross-scanning and surveillance of the airport; (2) a multi-point positioning system (MLAT), in which transponders are fitted to aircraft and vehicles and ground stations (transmitters and receivers) are deployed in the blind areas of the field surveillance radar; precise positioning is achieved from the time differences with which a transponder reply arrives at the receivers (TDOA positioning; see the sketch below), targets are identified by the address code in the reply, and the system is compatible with reply signals in the A/C, S and ADS-B modes; (3) ADS-B (Automatic Dependent Surveillance-Broadcast), a broadcast automatic monitoring system composed of multiple ground stations and airborne stations that completes two-way data communication in a meshed, multipoint-to-multipoint manner; the ADS-B system integrates communication and surveillance, consists of an information source, an information transmission channel, and an information processing and display part, and is generally used on aircraft, whose airborne ADS-B equipment broadcasts messages containing their position. The A-SMGCS combines these three surveillance technologies, correlates and comprehensively processes their information, and identifies and tracks aircraft and vehicles by serial number.
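As a rough illustration of the TDOA principle used by the MLAT system, a 2-D transponder position can be recovered from arrival-time differences as follows; the station layout, noise-free timing and least-squares solver are illustrative assumptions, not details from the patent.

```python
# Illustrative 2-D TDOA multilateration: estimate a transponder position
# from the time differences with which its reply reaches several ground
# stations. All numbers below are hypothetical.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # signal propagation speed, m/s

def tdoa_residuals(pos, stations, tdoas):
    """Measured vs. predicted range differences relative to station 0."""
    d = np.linalg.norm(stations - pos, axis=1)  # distance to each station
    return (d[1:] - d[0]) - tdoas * C           # zero at the true position

# hypothetical ground-station layout (metres) and a simulated reply
stations = np.array([[0.0, 0.0], [1200.0, 0.0], [0.0, 900.0], [1500.0, 1100.0]])
true_pos = np.array([400.0, 300.0])
d = np.linalg.norm(stations - true_pos, axis=1)
tdoas = (d[1:] - d[0]) / C                      # arrival-time differences

est = least_squares(tdoa_residuals, x0=np.array([100.0, 100.0]),
                    args=(stations, tdoas)).x
print(est)  # ~= [400., 300.]
```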
In this embodiment, the A-SMGCS system obtains the state information, global identity ID, detection time and detection reliability of at least one object collected by each scene device in the airport. The state information includes the object size (length, width, height), velocity and object center pose; the pose includes position and direction and can be characterized by longitude, latitude and altitude. The global identity ID is a fixed unique number of a target object in the airport, used to identify the different target objects detected by the field surveillance radar, the multi-point positioning system and ADS-B. The detection time is the time at which the object was detected. The detection reliability is determined from the covariance of the detection result (such as the pose covariance and velocity covariance): the smaller the covariance, the higher the reliability, and the covariance range is determined by the characteristics of the different sensors; for example, lidar gives high position-detection reliability while millimeter-wave radar gives high velocity-detection reliability. The A-SMGCS system merges the state information of the corresponding object according to the acquired global identity ID, detection time and detection reliability, then associates the global identity ID, detection time, detection reliability and the corresponding processing result to generate an optimized detection result for each object, and sends scene detection data containing all the optimized detection results to the cloud server.
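A minimal sketch of such an optimized detection record and the covariance-based reliability comparison might look as follows; the field names and the reliability formula are assumptions for illustration, not the A-SMGCS interface.

```python
# Sketch of an optimized detection record and reliability-based merging.
from dataclasses import dataclass

@dataclass
class Detection:
    global_id: str      # fixed unique number of the target in the airport
    t: float            # detection time (s)
    pose: tuple         # (longitude, latitude, altitude, heading)
    velocity: tuple     # (vx, vy) in m/s
    size: tuple         # (length, width, height) in m
    pose_cov: float     # pose covariance: smaller means more reliable
    vel_cov: float      # velocity covariance

def reliability(det: Detection) -> float:
    # Smaller covariance means higher detection reliability, so invert it.
    return 1.0 / (1.0 + det.pose_cov + det.vel_cov)

def merge(a: Detection, b: Detection) -> Detection:
    # For the same global ID, keep the state reported with higher reliability.
    assert a.global_id == b.global_id
    return a if reliability(a) >= reliability(b) else b
```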
And then the cloud server generates service end fusion data according to the scene detection data.
In a preferred embodiment, the cloud server 6 specifically includes:
an acquisition module 61 for acquiring scene detection data in an airport at a preset frequency;
The global data fusion module 63 is configured to obtain current service end fusion data, predict a state of a corresponding object at a current moment according to each optimized detection result in the scene detection data, and update the current service end fusion data according to the prediction result.
In a preferred embodiment, the cloud server further includes an optimizing module 62, where the optimizing module 62 is configured to obtain the original data collected by the scene device, supplement and correct the scene detection data generated by the a-SMGCS system 5 according to the original data, generate optimized scene detection data, and send the optimized scene detection data to the global data fusion module 63.
In the above embodiment, the original scene detection data collected by the collection module 61 is supplemented and corrected by the optimization module 62 to generate optimized scene detection data; the global data fusion module 63 then obtains the current service end fusion data, predicts the state of the corresponding object at the current moment from the optimized detection results, updates the current service end fusion data in time according to the prediction results, adds new data and removes redundant data, and thereby improves the real-time performance and accuracy of the current service end fusion data. In other embodiments, the optimization module 62 may be omitted, in which case the global data fusion module 63 generates the service end fusion data directly from the scene detection data provided by the A-SMGCS system.
Fig. 2 is a schematic structural diagram of a global data fusion module 63 in the vehicle-road collaboration-based object detection system provided in embodiment 2 of the present invention, and as shown in fig. 2, the global data fusion module 63 specifically includes:
a first prediction unit 631, configured to generate first state prediction data of each object at the current moment according to state information of the object at the corresponding detection time in the scene detection data, and form a first state prediction data set;
A second prediction unit 632, configured to obtain current server fusion data, generate second state prediction data of each object in the current server fusion data at the current time, and form a second state prediction data set;
- A first matching unit 633, configured to match the first state prediction data set with the second state prediction data set; if the two sets contain a target object with the same global identity ID, the first updating unit 634 is executed, and otherwise the second updating unit 635 or the third updating unit 636 is executed;
a first updating unit 634, configured to obtain target state prediction data with higher detection reliability from the first state prediction data and the second state prediction data of the target object, associate the target state prediction data with a global identity ID of the target object, and update the target state prediction data to the second state prediction data set to generate new server fusion data;
A second updating unit 635, configured to, when it is determined according to the global identity ID that any object in the first state prediction data set does not exist in the second state prediction data set, directly update the first state prediction data of the corresponding object into the second state prediction data set, and generate new server fusion data;
And a third updating unit 636, configured to, when it is determined according to the global identity ID that any object in the second state prediction data set does not exist in the first state prediction data set and the number of cyclic matching times reaches a preset threshold, delete the data corresponding to the object in the second state prediction data set, and generate new server fusion data.
In a specific embodiment, the first matching unit 633 searches and matches the first state prediction data set and the second state prediction data set using algorithms such as KF, EKF or ICP. Kalman filtering (KF) uses a linear system state equation to optimally estimate the system state from observed input and output data; because the observations include the effects of noise and interference in the system, the optimal estimation can also be viewed as a filtering process. The extended Kalman filter (EKF) linearizes the model around the current estimated mean and then applies the prediction and update steps of the Kalman filter, making it suitable for systems with differentiable models. The ICP (iterative closest point) algorithm is a data-registration method that uses closest-point search and is suited to matching free-form surfaces. Using the corresponding algorithm, the global data fusion module can match the data and thus obtain an accurate judgment of whether the two data sets contain target objects with the same global identity ID.
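For instance, the step that rolls each detection forward to the current moment can be sketched with a constant-velocity Kalman prediction; the 2-D state layout and process-noise model below are illustrative assumptions, since the patent does not fix a particular filter.

```python
# Constant-velocity Kalman prediction: one plausible way to generate state
# prediction data for the current moment from an older detection.
import numpy as np

def kf_predict(x, P, dt, q=0.5):
    """Propagate state x = [x, y, vx, vy] and covariance P forward by dt s."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    Q = q * dt * np.eye(4)        # simplified process noise
    x_pred = F @ x                # constant-velocity motion model
    P_pred = F @ P @ F.T + Q      # uncertainty grows with the horizon
    return x_pred, P_pred

# e.g. an object last detected 0.4 s ago at (10, 5), moving (3, 0) m/s
x, P = np.array([10.0, 5.0, 3.0, 0.0]), np.eye(4)
x_now, P_now = kf_predict(x, P, dt=0.4)  # x_now = [11.2, 5.0, 3.0, 0.0]
```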
In this embodiment, the first state prediction data of each object at the current moment is generated from the state information of each object in the scene detection data at its corresponding detection time; the current service end fusion data is obtained and the second state prediction data of each object in it at the current moment is generated; the first and second state prediction data sets are matched by the global identity ID of each object; and the second state prediction data set is updated according to the matching result to generate new service end fusion data, improving the real-time performance and accuracy of the service end fusion data.
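A condensed sketch of the matching and updating rules of the first, second and third updating units, keyed on the global identity ID, follows; the dictionary layout and the miss counter standing in for the "number of cyclic matching times" are assumptions for illustration.

```python
# Sketch of the server-side fusion update (first/second/third updating units).
MISS_LIMIT = 5  # preset threshold of consecutive unmatched cycles

def global_fusion_update(first_set, second_set, miss):
    """first_set/second_set map global_id -> (prediction, reliability)."""
    fused = dict(second_set)
    for gid, (pred, rel) in first_set.items():
        if gid in fused:
            # first updating unit: keep the more reliable prediction
            if rel >= fused[gid][1]:
                fused[gid] = (pred, rel)
        else:
            # second updating unit: new object, add it directly
            fused[gid] = (pred, rel)
        miss[gid] = 0
    for gid in list(fused):
        if gid not in first_set:
            # third updating unit: drop objects unmatched for MISS_LIMIT cycles
            miss[gid] = miss.get(gid, 0) + 1
            if miss[gid] >= MISS_LIMIT:
                del fused[gid]
    return fused  # the new service end fusion data
```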
The local data fusion module 74 of the automatic driving vehicle can update the current vehicle-end fusion data by using the local detection data collected by the vehicle-mounted sensor 71 and the service-end fusion data generated by the cloud server 6. Fig. 3 is a schematic structural diagram of a local data fusion module 74 in the vehicle-road collaboration-based object detection system according to embodiment 3 of the present invention, where the local data fusion module 74 specifically includes:
And the data processing unit 741 is configured to perform a merging process on the local detection data and the server fusion data, and generate merging process data with uniform format. The local detection data comprises state information, local identity ID, detection time and detection reliability of at least one object. The local identity ID here is an ID allocated to the local object after the vehicle-mounted sensor detects the local object, and is also an ID before the local object is not fused with the detection result of the server.
And a third prediction unit 742, configured to generate third state prediction data of each object at the current moment according to the state information of the object at the corresponding detection time in the combined processing data, and form a third state prediction data set.
The fourth prediction unit 743 is configured to obtain current vehicle-end fusion data, generate fourth state prediction data of each object in the current vehicle-end fusion data at the current time, and form a fourth state prediction data set.
And a second matching unit 744, configured to match the third state prediction data set with the fourth state prediction data set, execute the fourth updating unit if the target object with the same predicted position is included, and execute the fifth updating unit or the sixth updating unit if the target object with the same predicted position is not included.
A fourth updating unit 745, configured to determine whether the identity IDs of the target object are the same; when they are the same and only one is a global identity ID, target state prediction data with higher detection reliability is acquired from the third state prediction data and the fourth state prediction data of the target object, associated with the global identity ID of the target object, and updated into the fourth state prediction data set. When the identity IDs are different and both are global identity IDs, the third state prediction data of the corresponding object is updated directly into the fourth state prediction data set. In any other situation, target state prediction data with higher detection reliability is acquired from the third state prediction data and the fourth state prediction data of the corresponding object, associated with the corresponding identity ID in the current vehicle-end fusion data, and updated into the fourth state prediction data set.
And a fifth updating unit 746, configured to, when it is determined according to the predicted position that any object in the third state prediction data set does not exist in the fourth state prediction data set, directly update the third state prediction data of the corresponding object into the fourth state prediction data set, so as to form new vehicle end fusion data.
And a sixth updating unit 747, configured to, when it is determined according to the predicted position that any object in the fourth state prediction data set does not exist in the third state prediction data set and the number of cycle matching times reaches a preset threshold, delete data corresponding to the object in the fourth state prediction data set, and form new vehicle end fusion data.
In a specific embodiment, the second matching unit 744 generally searches and matches the third state prediction data set and the fourth state prediction data set by using an algorithm such as KF, EKF, ICP, so as to obtain an accurate determination result of whether the two data sets include the target object with the same predicted position.
In this embodiment, the local detection data collected by the vehicle-mounted sensor and the service end fusion data generated by the cloud server are merged into combined processing data with a uniform format; the third state prediction data of each object at the current moment is generated from the state information of each object in the combined processing data at its corresponding detection time; the current vehicle end fusion data is obtained and the fourth state prediction data of each object in it at the current moment is generated; the third and fourth state prediction data sets are matched by the predicted position and identity ID of each object; and the fourth state prediction data set is updated according to the matching result to generate new vehicle end fusion data, improving the real-time performance and accuracy of the vehicle end fusion data.
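The ID-resolution rules of the fourth updating unit can be sketched for a single pair of predictions already matched by predicted position; the tuple layout and the is_global() helper are hypothetical, and the first branch, whose translation is ambiguous, is read here as exactly one side carrying a global identity ID.

```python
# Sketch of the fourth updating unit's ID-resolution rules.
def resolve_matched_pair(third, fourth, is_global):
    """third/fourth: (obj_id, prediction, reliability) for the same position."""
    id3, pred3, rel3 = third    # from the combined local + server data
    id4, pred4, rel4 = fourth   # from the current vehicle-end fusion data
    if id3 != id4 and is_global(id3) and is_global(id4):
        # different IDs, both global: the combined-data prediction
        # directly replaces the vehicle-end entry
        return id3, (pred3, rel3)
    best = (pred3, rel3) if rel3 >= rel4 else (pred4, rel4)
    if is_global(id3) != is_global(id4):
        # exactly one global ID: keep the more reliable prediction,
        # associated with that global ID
        return (id3 if is_global(id3) else id4), best
    # any other situation: keep the more reliable prediction under the
    # ID already held in the vehicle-end fusion data
    return id4, best
```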
In another preferred embodiment, the automatic driving vehicle 7 further includes a control module 75, where the control module 75 includes a path planning module, a path adjusting module, and a navigation obstacle avoidance module, and is configured to perform path planning, path adjustment, and navigation obstacle avoidance on the self-vehicle according to the object detection result, so as to improve safety and traffic efficiency of the automatic driving vehicle, and reduce traffic accidents and traffic jams.
The embodiment of the invention also provides an object detection method based on vehicle-road cooperation, which comprises the following steps:
step 1, a cloud server generates service end fusion data according to scene detection data sent by an A-SMGCS system;
step 2, at least one automatic driving vehicle in an airport subscribes to and acquires the corresponding service end fusion data stored on the cloud server, acquires local detection data through the vehicle-mounted sensor, acquires the current vehicle end fusion data, and updates the current vehicle end fusion data by utilizing the local detection data and the service end fusion data, as sketched below.
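At a high level, step 2 on the vehicle can be written as a subscribe-and-update loop; the injected callables below are hypothetical stand-ins for the modules described above, not a patent-defined API.

```python
# Sketch of the vehicle-side cycle: read the subscribed service end fusion
# data and the on-board sensors, then update the vehicle end fusion data.
import time

def vehicle_loop(get_server_fusion, read_sensors, fuse, vehicle_fusion,
                 period=0.1):
    while True:
        server_data = get_server_fusion()  # subscribed service end fusion data
        local_data = read_sensors()        # local detection data
        vehicle_fusion = fuse(vehicle_fusion, local_data, server_data)
        time.sleep(period)                 # vehicle-end update cycle
```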
The preferred embodiment also provides an object detection method based on vehicle-road cooperation, as shown in fig. 4, comprising the following steps:
S1, acquiring state information, global identity ID, detection time and detection reliability of at least one object acquired by each scene device in an airport by an A-SMGCS system;
s2, the A-SMGCS system carries out combination processing on the state information of the corresponding object according to the acquired global identity ID, detection time and detection reliability;
S3, the A-SMGCS system correlates the global identity ID, the detection time, the detection reliability and the corresponding processing results to generate an optimized detection result of each object, and sends scene detection data containing all the optimized detection results to the cloud server;
s4, the cloud server generates service end fusion data according to scene detection data sent by the A-SMGCS system;
s5, at least one automatic driving vehicle in the airport subscribes to and acquires corresponding service end fusion data stored on the cloud server, and then acquires local detection data through a vehicle-mounted sensor and acquires current vehicle end fusion data;
S6, updating the current vehicle-end fusion data by using the local detection data and the server-end fusion data by using the automatic driving vehicles in the airport.
In a preferred embodiment, the scene equipment comprises any one or more of a scene monitoring radar, a multi-point positioning system and a broadcast automatic monitoring system, the signals can be used by a plurality of automatic driving vehicles together, and the signals are combined with a vehicle-mounted sensor arranged on the automatic driving vehicles to detect target objects in an airport, so that the perception blind area of vehicle-road coordination is reduced, and the overall construction cost is reduced.
In a preferred embodiment, the reliability of the vehicle-end fusion data after timely updating and fusion is high, the moving direction of a target object in an airport can be accurately predicted, and the automatic driving vehicle is subjected to path planning, path adjustment and navigation obstacle avoidance according to the prediction result, so that the safety and the passing efficiency of the automatic driving vehicle are improved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes is determined by their functions and internal logic, and the numbering should not limit the implementation of the embodiments of the present invention.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The descriptions of the foregoing embodiments each have their own emphasis; for parts not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the elements and method steps of the examples described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The present invention is not limited to the details and embodiments described herein; additional advantages and modifications may readily be made by those skilled in the art without departing from the spirit and scope of the general concepts defined by the claims and their equivalents, and the invention is not limited to the specific details, representative apparatus and illustrative examples shown and described herein.
Claims (8)
1. The object detection system based on the vehicle-road cooperation is characterized by comprising a cloud server and at least one automatic driving vehicle arranged in an airport, wherein the cloud server is connected with at least one scene device through an A-SMGCS system;
The cloud server comprises an acquisition module and a global data fusion module, wherein the acquisition module is used for acquiring scene detection data in an airport at a preset frequency; the global data fusion module is used for acquiring current service end fusion data, predicting the state of each object in the scene detection data at the current moment according to the optimized detection result of the corresponding object, and updating the current service end fusion data according to the prediction result; the optimized detection result comprises state information of at least one object, global Identity (ID), detection time and detection reliability;
Each autonomous vehicle is provided with at least one vehicle-mounted sensor, the autonomous vehicle comprises a first data acquisition module, a second data acquisition module and a local data fusion module,
The first data acquisition module is used for connecting a cloud server and acquiring service end fusion data generated by the cloud server according to scene detection data;
The second data acquisition module is used for connecting with the vehicle-mounted sensor and acquiring local detection data acquired by the vehicle-mounted sensor, wherein the local detection data comprises state information, local identity ID, detection time and detection reliability of at least one object;
The local data fusion module is used for acquiring current vehicle-end fusion data and updating the current vehicle-end fusion data by utilizing the local detection data and the service-end fusion data;
the global data fusion module specifically comprises:
The first prediction unit is used for generating first state prediction data of each object at the current moment according to the state information of the object at the corresponding detection time in the scene detection data, and forming a first state prediction data set;
the second prediction unit is used for acquiring the current service end fusion data, generating second state prediction data of each object in the current service end fusion data at the current moment, and forming a second state prediction data set;
The first matching unit is used for matching the first state prediction data set and the second state prediction data set; if the two sets contain target objects with the same global identity ID, the first updating unit is executed, and if they do not, the second updating unit or the third updating unit is executed;
the first updating unit is used for acquiring target state prediction data with higher detection reliability from the first state prediction data and the second state prediction data of the target object, associating the target state prediction data with the global identity ID of the target object, updating the target state prediction data to the second state prediction data set and generating new service end fusion data;
A second updating unit, configured to directly update, when it is determined according to the global identity ID that any object in the first state prediction data set does not exist in the second state prediction data set, the first state prediction data of the corresponding object to the second state prediction data set, so as to generate new server fusion data;
And a third updating unit, configured to, when it is determined according to the global identity ID that any object in the second state prediction data set does not exist in the first state prediction data set, and the number of cyclic matching times reaches a preset threshold, delete data corresponding to the object in the second state prediction data set, and generate new server fusion data.
2. The vehicle-road cooperation-based object detection system according to claim 1, wherein the a-SMGCS system is configured to obtain status information, a global identity ID, a detection time, and a detection reliability of at least one object collected by each scene device in an airport, combine the status information of the corresponding object according to the global identity ID, the detection time, and the detection reliability, correlate the global identity ID, the detection time, the detection reliability, and the corresponding processing results, generate an optimized detection result of each object, and send scene detection data including all the optimized detection results to the cloud server.
3. The vehicle-road collaboration-based object detection system of claim 2, wherein the scene device comprises any one or more of a field surveillance radar, a multi-point positioning system, and a broadcast automatic monitoring system.
4. The vehicle-road collaboration-based object detection system according to claim 2, wherein the cloud server further comprises an optimization module, the optimization module is configured to obtain raw data collected by the scene device, supplement and correct scene detection data generated by the a-SMGCS system according to the raw data, generate optimized scene detection data, and send the optimized scene detection data to the global data fusion module.
5. The vehicle-road cooperation-based object detection system of any one of claims 1-4, wherein the autonomous vehicle further comprises a control module for performing path planning, path adjustment, and navigation and obstacle avoidance for the host vehicle based on the object detection result.
6. The vehicle-road cooperation-based object detection system of claim 5, wherein the local data fusion module specifically comprises:
The data processing unit is used for combining the local detection data with the server fusion data to generate combined data in a uniform format;
The third prediction unit is used for generating third state prediction data of each object at the current moment according to the state information of the object at the corresponding detection time in the combined data, and forming a third state prediction data set;
The fourth prediction unit is used for acquiring current vehicle-end fusion data, generating fourth state prediction data of each object in the current vehicle-end fusion data at the current moment, and forming a fourth state prediction data set;
The second matching unit is used for matching the third state prediction data set against the fourth state prediction data set; if the two sets contain a target object with the same predicted position, the fourth updating unit is executed, and if the two sets contain no target object with the same predicted position, the fifth updating unit or the sixth updating unit is executed;
A fourth updating unit, configured to determine whether the identity IDs of the matched target object are the same; when the identity IDs are the same, or when only one of them is a global identity ID, acquire the target state prediction data with the higher detection reliability from the third state prediction data and the fourth state prediction data of the target object, associate the target state prediction data with the global identity ID of the target object, and update it into the fourth state prediction data set;
and to directly update the third state prediction data of the corresponding object into the fourth state prediction data set when the identity IDs are different and both are global identity IDs;
and, in all other cases, to acquire the target state prediction data with the higher detection reliability from the third state prediction data and the fourth state prediction data of the corresponding object, associate the target state prediction data with the corresponding identity ID in the current vehicle-end fusion data, and update it into the fourth state prediction data set;
A fifth updating unit, configured to, when it is determined according to the predicted position that any object in the third state prediction data set does not exist in the fourth state prediction data set, directly update the third state prediction data of the corresponding object into the fourth state prediction data set to form new vehicle-end fusion data;
And a sixth updating unit, configured to, when it is determined according to the predicted position that any object in the fourth state prediction data set does not exist in the third state prediction data set and the cycle-matching count for that object reaches a preset threshold, delete the data corresponding to the object from the fourth state prediction data set and form new vehicle-end fusion data (this vehicle-end cycle is sketched below).
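The vehicle-end cycle of claim 6 mirrors the server-side one, but matches objects by predicted position and then reconciles their identity IDs. The sketch below is illustrative: the position gate, the `G-` prefix convention for global identity IDs, and all function names are assumptions.

```python
import math

POSITION_GATE = 2.0   # hypothetical gate for "same predicted position", in metres
MISS_THRESHOLD = 3    # hypothetical value for the preset cycle-matching threshold


def is_global(obj_id: str) -> bool:
    # Assumption: global identity IDs carry a recognisable prefix.
    return obj_id.startswith("G-")


def matched(a: dict, b: dict) -> bool:
    """Two predictions refer to the same object if their positions coincide."""
    return math.dist(a["position"], b["position"]) <= POSITION_GATE


def update_matched(third: dict, fourth: dict) -> dict:
    """Fourth updating unit: reconcile one position-matched pair of records."""
    ids_equal = third["id"] == fourth["id"]
    exactly_one_global = is_global(third["id"]) != is_global(fourth["id"])
    if ids_equal or exactly_one_global:
        # Same ID, or only one side has a global ID: keep the more reliable
        # prediction and associate it with the global identity ID.
        best = max(third, fourth, key=lambda t: t["reliability"])
        gid = third["id"] if is_global(third["id"]) else fourth["id"]
        return {**best, "id": gid}
    if is_global(third["id"]) and is_global(fourth["id"]):
        # Two different global IDs: take the server-derived prediction directly.
        return dict(third)
    # All other cases: keep the more reliable prediction under the
    # identity ID already held by the vehicle-end fusion data.
    best = max(third, fourth, key=lambda t: t["reliability"])
    return {**best, "id": fourth["id"]}


def fuse_vehicle_side(third_set: list[dict], fourth_set: list[dict]) -> list[dict]:
    """One vehicle-end cycle: match by predicted position, then update."""
    fused, unmatched = [], list(fourth_set)
    for third in third_set:
        pair = next((f for f in unmatched if matched(third, f)), None)
        if pair is None:
            fused.append(dict(third))                 # fifth updating unit
        else:
            unmatched.remove(pair)
            fused.append(update_matched(third, pair))
    for leftover in unmatched:                        # sixth updating unit
        leftover["misses"] = leftover.get("misses", 0) + 1
        if leftover["misses"] < MISS_THRESHOLD:
            fused.append(leftover)
    return fused
```

Matching on position rather than ID is what lets the vehicle absorb server tracks it has never seen under a local ID; the ID reconciliation branches then decide which identity survives.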
7. An object detection method based on vehicle-road cooperation, using the object detection system based on vehicle-road cooperation according to any one of claims 1 to 6, characterized by comprising the following steps:
step 1, the cloud server generates server fusion data according to the scene detection data sent by the A-SMGCS system;
step 2, at least one autonomous vehicle in the airport subscribes to and acquires the corresponding server fusion data stored on the cloud server, acquires local detection data through its vehicle-mounted sensors, acquires the current vehicle-end fusion data, and updates the current vehicle-end fusion data using the local detection data and the server fusion data (the data flow is sketched below).
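An end-to-end reading of the two method steps, with hypothetical `cloud`, `vehicle` and `a_smgcs` interfaces standing in for whatever transport the deployment actually uses; the claim fixes the data flow, not an API.

```python
def object_detection_cycle(cloud, vehicle, a_smgcs) -> None:
    """One cycle of the claim-7 method under assumed interfaces."""
    # Step 1: the cloud server fuses the latest A-SMGCS scene detection data
    # into server fusion data (the claim-1 global fusion cycle).
    scene_data = a_smgcs.scene_detection_data()
    cloud.update_server_fusion(scene_data)

    # Step 2: the autonomous vehicle subscribes to the server fusion data and
    # fuses it with its own on-board detections (the claim-6 local fusion).
    server_fusion = vehicle.subscribe(cloud, topic="server_fusion")
    local_detections = vehicle.onboard_sensors.detect()
    vehicle.update_vehicle_fusion(local_detections, server_fusion)
```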
8. The vehicle-road cooperation-based object detection method of claim 7, wherein the A-SMGCS system sending scene detection data to the cloud server specifically comprises the following steps: the A-SMGCS system obtains the state information, global identity ID, detection time and detection reliability of at least one object collected by each scene device in the airport, merges the state information of the corresponding object according to the global identity ID, the detection time and the detection reliability, then associates the global identity ID, the detection time and the detection reliability with the corresponding merged result to generate an optimized detection result for each object, and sends scene detection data containing all the optimized detection results to the cloud server.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210255205.9A CN114882717B (en) | 2022-03-16 | 2022-03-16 | Object detection system and method based on vehicle-road cooperation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114882717A (en) | 2022-08-09
CN114882717B (en) | 2024-05-17
Family
ID=82666627
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210255205.9A Active CN114882717B (en) | 2022-03-16 | 2022-03-16 | Object detection system and method based on vehicle-road cooperation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114882717B (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104765018A (en) * | 2014-01-06 | 2015-07-08 | 福特全球技术公司 | Method and system for a head unit application host for a radar detector |
CN108630015A (en) * | 2018-05-21 | 2018-10-09 | 浙江吉利汽车研究院有限公司 | A kind of driving warning method, device and electronic equipment |
CN109429507A (en) * | 2017-06-19 | 2019-03-05 | 北京嘀嘀无限科技发展有限公司 | System and method for showing vehicle movement on map |
CN109817009A (en) * | 2018-12-31 | 2019-05-28 | 天合光能股份有限公司 | Method for acquiring dynamic traffic information required by unmanned driving |
CN110895147A (en) * | 2018-08-24 | 2020-03-20 | 百度(美国)有限责任公司 | Image data acquisition logic for an autonomous vehicle that captures image data with a camera |
WO2020104551A1 (en) * | 2018-11-22 | 2020-05-28 | Robert Bosch Gmbh | Object recognition using the sensor system of vehicles |
CN111540237A (en) * | 2020-05-19 | 2020-08-14 | 河北德冠隆电子科技有限公司 | Method for automatically generating vehicle safety driving guarantee scheme based on multi-data fusion |
CN212032368U (en) * | 2020-03-26 | 2020-11-27 | 中国民用航空总局第二研究所 | Safety early warning device for airport scene special operation vehicle |
CN112085960A (en) * | 2020-09-21 | 2020-12-15 | 北京百度网讯科技有限公司 | Vehicle-road cooperative information processing method, device and equipment and automatic driving vehicle |
WO2021009531A1 (en) * | 2019-07-12 | 2021-01-21 | 日産自動車株式会社 | Information processing device, information processing method, and program |
CN112419773A (en) * | 2020-11-19 | 2021-02-26 | 成都云科新能汽车技术有限公司 | Vehicle-road cooperative unmanned control system based on cloud control platform |
CN112550263A (en) * | 2019-09-24 | 2021-03-26 | 本田技研工业株式会社 | Information processing device, vehicle system, information processing method, and storage medium |
CN112802346A (en) * | 2020-12-28 | 2021-05-14 | 苏州易航远智智能科技有限公司 | Autonomous parking system and method based on cloud sharing and map fusion |
CN113168765A (en) * | 2018-12-06 | 2021-07-23 | 比特传感株式会社 | Traffic management server, and method and computer program for traffic management using the same |
CN113866758A (en) * | 2021-10-08 | 2021-12-31 | 深圳清航智行科技有限公司 | Scene monitoring method, system, device and readable storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9759812B2 (en) * | 2014-10-02 | 2017-09-12 | Trimble Inc. | System and methods for intersection positioning |
US10838418B2 (en) * | 2019-01-31 | 2020-11-17 | StradVision, Inc. | Method for providing autonomous driving service platform to be used for supporting autonomous driving of vehicles by using competitive computing and information fusion, and server using the same |
2022-03-16: Application CN202210255205.9A granted as CN114882717B (status: Active)
Non-Patent Citations (1)
Title |
---|
Airport runway object detection based on an optimized YOLO method; Cai Chengtao; Wu Kejun; Yan Yongjie; Command Information System and Technology; 2018-07-02 (No. 3); full text *
Similar Documents
Publication | Title
---|---
US20220032955A1 (en) | Vehicle control device and vehicle control method
EP0744630A2 (en) | Airport surface monitoring and runway incursion warning system
CN104537896B (en) | A kind of complete silent spatial domain monitoring and avoidance system and spatial domain monitoring and preventing collision method
US11055933B2 (en) | Method for operating a communication network comprising a plurality of motor vehicles, and motor vehicle
CN113866758B (en) | Scene monitoring method, system, device and readable storage medium
CN112382131B (en) | Airport scene safety collision avoidance early warning system and method
CN104269053A (en) | Intelligent traffic system and method and intelligent automobile
CN103699713A (en) | Collision detection method for airplane formation and application of method
US11600178B2 (en) | Roadway information detection systems consists of sensors on automonous vehicles and devices for the road
CN111477010A (en) | Device for intersection holographic sensing and control method thereof
US20230252888A1 (en) | Systems and Methods for Interactive Vehicle Transport Networks
EP4148385A1 (en) | Vehicle navigation positioning method and apparatus, and base station, system and readable storage medium
CN112534297A (en) | Information processing apparatus, information processing method, computer program, information processing system, and mobile apparatus
CN112017482A (en) | Method and system for avoiding collision between aircraft and other flying objects
US20220065982A1 (en) | Traffic management system and an unmanned aerial vehicle compatible with such a system
US20210158128A1 (en) | Method and device for determining trajectories of mobile elements
KR102627478B1 (en) | Vehicle control system, non-image sensor module and sensing data processing method
CN114882717B (en) | Object detection system and method based on vehicle-road cooperation
CN114655260A (en) | Control system of unmanned tourist coach
CN112083420B (en) | Unmanned aerial vehicle collision avoidance method and device and unmanned aerial vehicle
CN116884277A (en) | Configurable low-altitude environment sensing and anti-collision system design method
US20240079795A1 (en) | Integrated modular antenna system
WO2023175618A1 (en) | Cloud-based sensing and control system using networked sensors for moving or stationary platforms
US20230103178A1 (en) | Systems and methods for onboard analysis of sensor data for sensor fusion
CN114739381B (en) | Airport vehicle navigation system and method
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |