CN114926983A - Traffic accident emergency oriented multi-scale comprehensive sensing method - Google Patents
Traffic accident emergency oriented multi-scale comprehensive sensing method
Info
- Publication number: CN114926983A
- Application number: CN202210532340.3A (filed 2022-05-11)
- Authority: CN (China)
- Prior art keywords: accident, traffic, traffic accident, scene, mobile measuring
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G08G1/0104 — Traffic control systems for road vehicles; measuring and analyzing of parameters relative to traffic conditions
- G08G1/0112 — Measuring and analyzing of parameters relative to traffic conditions based on the source of data from the vehicle, e.g. floating car data [FCD]
- G08G1/0141 — Measuring and analyzing of parameters relative to traffic conditions for specific applications, for traffic information dissemination
- G06Q50/26 — ICT specially adapted for specific business sectors; government or public services
- G06V20/46 — Scenes; extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G06V20/54 — Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
- G06V2201/07 — Indexing scheme for image or video recognition: target detection
- G06V2201/08 — Indexing scheme for image or video recognition: detecting or categorising vehicles
Abstract
The invention discloses a traffic accident emergency oriented multi-scale comprehensive sensing method. Real-time video data from traffic video cameras is processed with a target detection algorithm to extract information on possible sudden road accidents; based on the accident's spatiotemporal position information, a maneuverable unmanned aerial vehicle platform is automatically planned and flies to the vicinity of the scene for reconnaissance, extracting the accident range and scene information; based on the scene information, a ground mobile measuring vehicle platform is automatically planned for close-range video observation; an atmospheric environment sensor carried by the mobile measuring vehicle monitors on-site environmental parameters; and based on the traffic video and environmental monitoring data, the server provides traffic accident scene perception results for reference in subsequent road rescue decisions. The method realizes comprehensive perception of traffic accidents across multiple air-ground cross-scale platforms, obtains multi-dimensional perception data of a traffic accident scene in a single pass, and supports scientific emergency decision-making for traffic accidents.
Description
Technical Field
The invention relates to the field of traffic supervision, in particular to a traffic accident emergency oriented multi-scale comprehensive perception method.
Background
Urban traffic has always been a key field of smart city construction. However, the pace of urban road infrastructure renewal cannot keep up with the growth of road motor vehicles; traffic flow surges at rush hour and at key intersections, causing a series of traffic problems, of which sudden traffic accidents are the most pressing. Traffic accidents often cause vehicle damage, injury and even death, and seriously reduce road traffic efficiency. Real-time, comprehensive and accurate perception of accident-prone emergency scenes is therefore an urgent practical requirement and a difficult technical problem.
Traditional traffic accident emergency perception relies mainly on manual alarm calls and manual dispatching, so time lag is a prominent problem. In recent years, traffic cameras have been adopted to detect accidents on key road sections in real time, with linked tracking across multiple cameras, greatly improving timeliness. Meanwhile, novel sensing platforms such as unmanned aerial vehicles, mobile measuring vehicles and even satellite remote sensing are gradually being applied to traffic monitoring and analysis. However, from the perspective of holistic and systems theory, the platforms currently used for traffic accident emergency perception (cameras, unmanned aerial vehicles, mobile measuring vehicles) usually operate independently, and a cooperation method between aerial and ground platforms is lacking. Handling the same traffic accident therefore requires manual scheduling, planning and analysis, which is time-consuming, labor-intensive, and precludes immediate response. A multi-scale perception method in which aerial and ground platforms cooperate is thus needed, one that comprehensively exploits the complementary advantages of the various sensing platforms and provides immediate, comprehensive and accurate perception for traffic accident emergencies.
Disclosure of Invention
Aiming at the defects of the prior art, the application provides a traffic accident emergency oriented multi-scale comprehensive perception method.
The invention provides a traffic accident emergency oriented multi-scale comprehensive sensing method, which specifically comprises the following steps:
step 1, extracting possible road traffic accident spatiotemporal position information by adopting real-time video data of a traffic video camera deployed at a fixed point position and utilizing a target detection algorithm;
step 2, automatically planning a maneuvering unmanned aerial vehicle platform based on the traffic accident space-time position information, flying to the vicinity of a scene to carry out specific investigation on the traffic accident, and extracting an accident range and scene information;
step 3, automatically planning a mobile measuring vehicle platform on the ground based on the field information, and moving to the field to carry out close-range video observation;
step 4, monitoring the accident site in real time with an atmospheric environment sensor carried by the mobile measuring vehicle, obtaining environmental parameters;
and step 5, based on the multi-source data formed by the environmental parameters, the traffic accident spatiotemporal position information and the scene information, analyzing the multi-source data with a server and providing the analysis results for reference in subsequent road rescue decision-making.
Further, step 2 specifically comprises:
step 201, the server, according to the traffic accident spatiotemporal position information, derives a waypoint-task execution control instruction together with three flight parameters: longitude and latitude coordinates, flight height and flight speed; the server sends the instruction and parameters to a mobile terminal platform running a flight control program, and the mobile terminal forwards them to the unmanned aerial vehicle controller;
step 202, taking the accident longitude and latitude coordinates as the center, calculating and planning three square routes around the traffic accident scene, one for each of the three gimbal pitch angles of 30°, 45° and 60° at which the unmanned aerial vehicle photographs;
step 203, during waypoint-task flight, the unmanned aerial vehicle photographs continuously at a set time interval and stores the images on the body memory card, recording the coordinates, elevation and camera parameters of each image, while simultaneously pushing the accident scene video stream to the server in real time;
and step 204, based on the traffic accident scene video stream shot by the unmanned aerial vehicle, extracting each vehicle's minimum enclosing rectangle with a video feature extraction algorithm, calculating the overlapping area of any two rectangles and, if they overlap, displaying the overlapping part with a green frame.
The step 3 specifically comprises the following steps:
301, obtaining a planned running track of the mobile measuring vehicle through a map route planning API by utilizing accident longitude and latitude coordinates and adopting a riding route planning method;
step 302, acquiring the path coordinate point planned in the step 301 by adopting a coordinate and protocol conversion method, and acquiring a path file for the operation of the mobile measuring vehicle;
step 303, automatically compiling the path file into mobile measuring vehicle motion control messages, uploading the compiled control messages to the mobile measuring vehicle, and sending an unlocking instruction; the mobile measuring vehicle then proceeds to the target observation area to carry out ground observation;
step 304, the camera module on the mobile measuring vehicle takes pictures at a set time interval and stores them on the body's solid-state disk; it records at a set resolution and pushes the accident scene video stream to the server at a set bit rate.
Compared with the prior art, the invention has the following beneficial effects: it realizes joint perception of traffic accidents across multiple air-ground cross-scale platforms and meets the multi-factor comprehensive observation requirements of the accident emergency site, giving it clear novelty and practical application value.
Drawings
FIG. 1 is an architecture diagram of the air-ground cooperative multi-scale perception method of the present invention;
FIG. 2 is a flowchart illustrating step 2 in an embodiment of the present invention;
FIG. 3 is a communication diagram between the server and the mobile terminal in step 201 according to the embodiment of the present invention;
FIG. 4 is a graph of the flight planning results of step 202 in the embodiment of the present invention;
FIG. 5 is a diagram of the result of the algorithm of step 204 in an embodiment of the present invention;
FIG. 6 is a flowchart of a method of step 3 in an embodiment of the present invention;
fig. 7 is a flowchart of the path planning of the mobile measuring vehicle in step 3 according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is an architecture diagram of the present invention; the method of the invention specifically comprises:
step 1, extracting possible road traffic accident spatiotemporal position information by adopting real-time video data of a traffic video camera deployed at a fixed point position and utilizing a target detection algorithm;
It should be noted that, based on the video stream of the fixed-position traffic camera, the server reads the stream and performs vehicle target detection on the video images; each detected vehicle is enclosed in a displayed bounding rectangle annotated with the confidence of the corresponding target.
An inter-frame difference method is used to estimate each vehicle's speed over a period of time; if the speed remains 0 throughout, an accident may have occurred, and information such as the camera's positioning coordinates (for example, 114.36° E, 30.27° N) and an accident alert is fed back to the server.
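As an illustration of this step, the following Python sketch (assuming OpenCV, an accessible RTSP stream, and a vehicle bounding box supplied by an external detector; the stream URL, box and thresholds are hypothetical) checks whether a detected vehicle stays motionless by inter-frame differencing:

```python
import time
import cv2

CAMERA_LNG, CAMERA_LAT = 114.36, 30.27  # fixed camera position from the example above

def is_vehicle_stationary(stream_url, box, window_s=10.0, diff_thresh=2.0):
    """Watch the region `box` = (x, y, w, h) for `window_s` seconds and return
    True if inter-frame differencing shows essentially no motion inside it."""
    cap = cv2.VideoCapture(stream_url)
    x, y, w, h = box
    prev, start = None, time.time()
    while time.time() - start < window_s:
        ok, frame = cap.read()
        if not ok:
            break
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        if prev is not None and cv2.absdiff(roi, prev).mean() > diff_thresh:
            cap.release()
            return False  # the vehicle moved; no accident suspicion here
        prev = roi
    cap.release()
    return True

# Box comes from the target detection stage; alert the server if the vehicle is frozen.
if is_vehicle_stationary("rtsp://camera/stream", (400, 300, 120, 80)):
    print(f"accident alert at {CAMERA_LNG} E, {CAMERA_LAT} N")
```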
Step 2, automatically planning a maneuvering unmanned aerial vehicle platform based on the traffic accident space-time position information, flying to the vicinity of a scene to carry out specific investigation on the traffic accident, and extracting an accident range and scene information;
it should be noted that step 2 specifically includes:
step 201, the server, according to the traffic accident spatiotemporal position information, derives a waypoint-task execution control instruction together with three flight parameters: longitude and latitude coordinates, flight height and flight speed; the server sends the instruction and parameters to a mobile terminal platform running a flight control program, and the mobile terminal forwards them to the unmanned aerial vehicle controller;
step 202, taking the accident longitude and latitude coordinates as the center, calculating and planning three square routes around the traffic accident scene, one for each of the three gimbal pitch angles of 30°, 45° and 60° at which the unmanned aerial vehicle photographs;
step 203, during waypoint-task flight, the unmanned aerial vehicle photographs continuously at a set time interval and stores the images on the body memory card, recording the coordinates, elevation and camera parameters of each image, while simultaneously pushing the accident scene video stream to the server in real time;
and step 204, based on the traffic accident scene video stream shot by the unmanned aerial vehicle, extracting each vehicle's minimum enclosing rectangle with a video feature extraction algorithm, calculating the overlapping area of any two rectangles and, if they overlap, displaying the overlapping part with a green frame.
As an example, please refer to fig. 2.
After acquiring the spatiotemporal position information of the accident site, the server sends an instruction and parameters to the Android flight control program; the flight control program automatically plans the unmanned aerial vehicle's route from the parameters, controls the unmanned aerial vehicle to fly near the site, photographs the accident scene and sends video images back to the server, which extracts the accident range and scene information from the returned video with a feature extraction algorithm. The process comprises the following substeps:
Step 2.1, as shown in fig. 3, the server communicates through a socket with the Android mobile terminal running the flight control program; the server socket listens on a port and waits for the mobile terminal to establish a connection. The server sends the unmanned aerial vehicle control instruction "execute waypoint task" together with three parameters (height h = 120 m, speed v = 10 m/s, and longitude/latitude lng = 114.36, lat = 30.27) to the flight control program, which receives and parses the instruction and parameters and forwards the control instruction to the unmanned aerial vehicle's controller;
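A minimal server-side sketch of this exchange in Python (the line-delimited JSON wire format, port and field names are assumptions; the patent does not specify the encoding):

```python
import json
import socket

def send_waypoint_task(port=9000, h=120, v=10, lng=114.36, lat=30.27):
    """Listen for the Android flight-control app and send it the waypoint task."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)                       # long-lived listen, as in step 2.1
    conn, _addr = srv.accept()          # mobile terminal connects to the server
    msg = {"cmd": "execute_waypoint_task",
           "height_m": h, "speed_mps": v, "lng": lng, "lat": lat}
    conn.sendall((json.dumps(msg) + "\n").encode("utf-8"))
    conn.close()
    srv.close()

send_waypoint_task()
```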
Referring to fig. 4, in step 2.2, after receiving the instruction and parameters, the flight control program runs a route planning algorithm. Taking 0.001141° of longitude and 0.000899° of latitude as corresponding to 1 m, it uses s = h / tan α (where α is 60°, 45° or 30°, h is the flight height, and s is the horizontal distance from the accident center point) to calculate the 4 vertices of the square at each pitch angle, and plans, in counterclockwise order, three square routes centered on the accident point's longitude/latitude coordinates, one for each gimbal pitch angle of 30°, 45° and 60° around the traffic accident scene, as shown in fig. 4. The unmanned aerial vehicle first photographs vertically downward at the accident center point, then executes the waypoint task along the routes, and returns to the home point after the task is completed;
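The route computation can be sketched in Python as follows; the stand-off distance s = h / tan α and the degree-per-metre factors are taken from the step above, while the function and constant names are illustrative:

```python
import math

DEG_PER_M_LNG = 0.001141  # longitude degrees per metre, as quoted in step 2.2
DEG_PER_M_LAT = 0.000899  # latitude degrees per metre, as quoted in step 2.2

def square_route(lng, lat, h, alpha_deg):
    """4 vertices of the square route (counterclockwise) for one gimbal pitch angle."""
    s = h / math.tan(math.radians(alpha_deg))   # horizontal distance to accident point
    dlng, dlat = s * DEG_PER_M_LNG, s * DEG_PER_M_LAT
    return [(lng + dlng, lat + dlat), (lng - dlng, lat + dlat),
            (lng - dlng, lat - dlat), (lng + dlng, lat - dlat)]

# One square per pitch angle: steeper angles give tighter squares around the accident.
routes = {a: square_route(114.36, 30.27, h=120, alpha_deg=a) for a in (30, 45, 60)}
```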
Step 2.3, while flying the planned routes, the unmanned aerial vehicle keeps its heading toward the accident site and photographs the site in a surrounding pattern at one picture every 2 seconds; the pictures are stored on the body memory card in JPEG format, while the accident site video is simultaneously pushed back in real time to the server and the mobile measuring vehicle over RTMP in H.264 encoding;
Referring to fig. 5, in step 2.4, the drone transmits the accident scene video stream to the server, which processes the incoming video images: each vehicle is enclosed in a bounding rectangle annotated with the confidence of the corresponding target (here 0.61 and 0.84 for the two vehicles). Whether the two vehicles have collided is judged by computing the intersection-over-union (IoU) of the two rectangles: if the IoU is greater than 0, a collision may exist, and the overlapping part of the two rectangles is marked as the possible collision area of the two vehicles.
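A Python sketch of this overlap test (box coordinates are hypothetical; any IoU above 0 flags the intersection rectangle as the possible collision region):

```python
def iou_and_overlap(a, b):
    """a, b: boxes as (x1, y1, x2, y2). Return (IoU, overlap box or None)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0, None
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter), (ix1, iy1, ix2, iy2)

iou, overlap = iou_and_overlap((100, 100, 220, 180), (180, 120, 320, 200))
if iou > 0:
    print("possible collision, mark region:", overlap)  # drawn as the green frame
```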
Step 3, automatically planning a ground mobile measuring vehicle platform based on the field information, and moving to the field to carry out close-range video observation;
referring to fig. 6, it should be noted that step 3 specifically includes:
step 301, obtaining a planned running track of the mobile measuring vehicle by utilizing accident longitude and latitude coordinates and adopting a riding route planning method through a map route planning API;
step 302, acquiring the path coordinate points planned in the step 301 by adopting a coordinate and protocol conversion method, and acquiring a path file which can be used for the operation of the mobile measuring vehicle;
step 303, automatically compiling the path file into mobile measuring vehicle motion control messages, uploading the compiled control messages to the mobile measuring vehicle, and sending an unlocking instruction; the mobile measuring vehicle then proceeds to the target observation area to carry out ground observation;
step 304, the camera module on the mobile measuring vehicle takes pictures at a set time interval and stores them on the body's solid-state disk; it records at a set resolution and pushes the accident scene video stream to the server at a set bit rate.
Referring to fig. 7, as an embodiment, the processing procedure of step 3 of the present application is as follows:
Step 3.1, the server passes the destination coordinates lng = 114.36, lat = 30.27 to the AMap (Gaode Map) riding route planning API (AMap.Riding), takes the current GNSS position lng = 114.33, lat = 30.30 as the starting point, and requests the fastest route.
The API returns the number of riding navigation segments (Count) of the route and the coordinate array of the navigation segments (Array.<LngLat>), which are then stored in memory.
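For illustration, a Python sketch of such a route request against AMap's REST bicycling endpoint (the patent uses the JS AMap.Riding plugin; the endpoint, API key and response parsing here are assumptions):

```python
import requests

def plan_riding_route(key, origin, dest):
    """origin/dest: (lng, lat) tuples. Return a flat list of (lng, lat) waypoints."""
    r = requests.get("https://restapi.amap.com/v4/direction/bicycling",
                     params={"key": key,
                             "origin": f"{origin[0]},{origin[1]}",
                             "destination": f"{dest[0]},{dest[1]}"},
                     timeout=10)
    r.raise_for_status()
    points = []
    for path in r.json().get("data", {}).get("paths", []):
        for step in path.get("steps", []):
            for pair in step.get("polyline", "").split(";"):  # "lng,lat;lng,lat;..."
                if pair:
                    lng, lat = pair.split(",")
                    points.append((float(lng), float(lat)))
    return points

route = plan_riding_route("YOUR_AMAP_KEY", (114.33, 30.30), (114.36, 30.27))
```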
Step 3.2, the server reads the navigation segment coordinate array (Array.<LngLat>) stored in memory in step 3.1 and, through protocol conversion, converts it into a KML format usable for mobile measuring vehicle operation.
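The coordinate-array-to-KML conversion can be sketched as follows (the KML skeleton is the standard LineString form; the placemark name and output path are illustrative):

```python
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document><Placemark><name>accident_route</name>
    <LineString><coordinates>{coords}</coordinates></LineString>
  </Placemark></Document>
</kml>"""

def waypoints_to_kml(points, path="route.kml"):
    """points: iterable of (lng, lat) pairs from the riding-route planning step."""
    coords = " ".join(f"{lng},{lat},0" for lng, lat in points)
    with open(path, "w", encoding="utf-8") as f:
        f.write(KML_TEMPLATE.format(coords=coords))

waypoints_to_kml([(114.33, 30.30), (114.345, 30.285), (114.36, 30.27)])
```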
Step 3.3, the server takes the KML file containing the navigation segment coordinate set generated in step 3.2 and imports it into a MAVLink protocol conversion program, obtaining MAVLink instruction messages for autonomous driving of the mobile measuring vehicle, for example 'MAVLINK_MSG_ID_GPS_RAW_INT - time_usec: 0, lat: 30.2720735806, lon: 114.3367281, alt: 0, eph: 0, epv: 65535, vel: 0, cog: 0, fix_type: 0, satellites_visible: 0'. It then submits an unlocking instruction to the mobile measuring vehicle and switches the operation mode from Manual to Auto, after which the vehicle moves to the target observation area according to the message instructions.
Step 3.4, the camera module on the mobile measuring vehicle takes pictures at a set time interval and stores them on the body's solid-state disk; it records at a set resolution and pushes the stream to the server over RTSP. The server communicates through a socket either with an Android mobile terminal linked to the measuring robot by direct radio frequency connection, or with the vehicle-mounted control system of the measuring robot over a mobile data network; in either case the server socket listens on a port, waits for the connection, and obtains the running state information of the mobile measuring vehicle. The environment sensors on the mobile measuring vehicle (atmospheric temperature, PM2.5, SO2 and the like) store their observations in the vehicle's onboard microcomputer and push the acquired data to the server over UDP.
Step 4, monitoring the accident site in real time with an atmospheric environment sensor carried by the mobile measuring vehicle, obtaining environmental parameters;
and 4.1, carrying environmental element sensing equipment such as carbon dioxide, sulfur dioxide, nitrogen dioxide, air temperature and humidity, noise, illumination intensity and the like on the mobile measuring vehicle. The position and the starting of the environment sensing equipment are controlled by a carrier moving measuring vehicle. When the mobile measuring vehicle receives the execution task, the sensor power supply is started and preheated, and meanwhile, the vehicle goes to the accident site along with the mobile measuring vehicle;
and 4.2, observing the accident scene by the mobile measuring vehicle according to the predicted track, and acquiring current time information and space information (longitude and latitude) through a GNSS module arranged on the mobile measuring vehicle. These spatiotemporal position information are acquired once per second and by means of this information a database header is established in the computer system of the mobile measuring vehicle.
Step 4.3, the environmental sensors on the mobile measuring vehicle sample the various environmental elements at a set frequency (e.g., once per second) and transmit them as Modbus messages over an RS485 bus to the vehicle's onboard computer system. The collected spatiotemporal position and environmental parameter information is recorded in a database table.
Step 4.4, at a set frequency (e.g., every 15 seconds), the computer system of the mobile measuring vehicle synchronizes the database with the service center over TCP/IP via the 4G/5G module.
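A Python sketch of the steps 4.2 to 4.4 record-and-sync loop (the SQLite table layout, field names and service-center address are assumptions, not the patent's schema):

```python
import json
import socket
import sqlite3

db = sqlite3.connect("onboard.db")
db.execute("""CREATE TABLE IF NOT EXISTS obs (
    ts REAL, lng REAL, lat REAL, co2 REAL, so2 REAL, no2 REAL,
    temp REAL, humidity REAL, noise REAL, synced INTEGER DEFAULT 0)""")

def record(ts, lng, lat, readings):
    """Store one sensor sample keyed by the once-per-second GNSS time/position."""
    db.execute("INSERT INTO obs (ts, lng, lat, co2, so2, no2, temp, humidity, noise)"
               " VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
               (ts, lng, lat, *(readings[k] for k in
                                ("co2", "so2", "no2", "temp", "humidity", "noise"))))
    db.commit()

def sync(host="service.center.example", port=7000):
    """Every 15 s: push unsynced rows to the service center over TCP/IP (4G/5G link)."""
    rows = db.execute("SELECT rowid, * FROM obs WHERE synced = 0").fetchall()
    if not rows:
        return
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(json.dumps(rows).encode("utf-8"))
    db.execute("UPDATE obs SET synced = 1 WHERE synced = 0")
    db.commit()
```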
Step 5, based on the multi-source data formed by the environmental parameters, the traffic accident spatiotemporal position information and the scene information, the server analyzes the data and provides the analysis results for reference in subsequent road rescue decision-making.
Take the case of a car colliding with a truck carrying hazardous chemicals. The car is slightly damaged and the truck overturns. A road camera sends pictures of the collision back to the server, where a collision detection algorithm recognizes the traffic image and determines that a traffic accident has occurred. Based on the camera position and the height of surrounding obstructions, the server sends the accident site's longitude/latitude coordinates (e.g., 114.36° E, 30.27° N), a flight height (e.g., 20 m) and a flight speed (e.g., 3 m/s) to the flight control system, and sends the site coordinates to the unmanned measuring vehicle control system. After the visible-light unmanned aerial vehicle reaches the accident scene, it circles the accident center point at gimbal pitch angles of 60°, 45° and 30°, photographs the scene, and pushes the video stream back in real time to the control center and the mobile measuring vehicle. Based on the accident situation and the on-site spatial conditions returned by the unmanned aerial vehicle, the control center dispatches a thermal-infrared unmanned aerial vehicle to detect infrared signatures of hazardous chemical leakage around the site (e.g., abnormally high or low temperatures) and, from the temperature display, determines personnel positions, which are relayed to the control center in real time. When a leak of dangerous goods is found, the ground measuring vehicle is routed to the accident site, where its atmospheric environment sensors monitor environmental parameters in real time (e.g., detecting a rising sulfur dioxide concentration or low temperature) and return them to the control center. The on-site hazard level is assessed from the environmental parameters and expert experience; matched with the obtained personnel positions, the measuring vehicle is dispatched into narrow spaces to search for people and return close-up video of their surroundings, enabling precise rescue measures. Based on the atmospheric parameters and the condition of trapped persons, professionals are dispatched to the site for rescue in time.
The invention has the following effects:
(1) Comprehensive perception of traffic accidents across multiple air-ground cross-scale platforms
Traditional traffic accident perception relies mainly on a single sensing platform, such as a video surveillance network, an unmanned aerial vehicle or a mobile measuring vehicle; the acquired data have a single structure and limited information content, making it difficult to reflect the traffic accident scene comprehensively. Even where several sensing platforms are combined, manual scheduling predominates and timeliness is insufficient. The method of the invention introduces ideas from systems theory and cooperation theory and networks the respective strengths of the different platforms (video camera, unmanned aerial vehicle, mobile measuring vehicle), achieving deep integration across multiple air-ground cross-scale platforms and acquiring information on traffic accident detection, the accident site and the accident environment. The method therefore offers notable advantages in the types of sensing platform used, the cooperative sensing mode, and the sensing results obtained.
(2) Joint perception of video and environmental parameters at the traffic accident scene
Traditional traffic accident perception focuses mainly on the vehicle damage, road damage and casualties caused by an accident. While this captures the principal information, as concepts such as green transportation and intelligent transportation are gradually put into practice, this level of perception is no longer sufficient. In particular, a serious accident involving a hazardous chemical transport vehicle can pollute the site environment and cause a series of long-term, hidden consequences. It is therefore urgent to sense both video and environmental parameters of the accident scene simultaneously, grasping the on-site condition of vehicles, roads and personnel as well as the regional atmospheric environmental parameters, so that traffic emergency rescue can be implemented more efficiently. The invention's approach of mounting both a video camera and environmental monitoring sensors on a mobile vehicle platform with real-time data transmission meets the multi-factor comprehensive observation requirements of the accident emergency site and has clear novelty and practical application value.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
The above embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.
Claims (3)
1. A traffic accident emergency oriented multi-scale comprehensive perception method, characterized by comprising the following steps:
step 1, extracting possible road traffic accident spatiotemporal position information by adopting real-time video data of a traffic video camera deployed at a fixed point position and utilizing a target detection algorithm;
step 2, automatically planning a maneuvering unmanned aerial vehicle platform based on the traffic accident space-time position information, flying to the vicinity of a scene to carry out specific investigation on the traffic accident, and extracting an accident range and scene information;
step 3, automatically planning a mobile measuring vehicle platform on the ground based on the field information, and moving to the field to carry out close-range video observation;
step 4, monitoring the accident site in real time with an atmospheric environment sensor carried by the mobile measuring vehicle, obtaining environmental parameters;
and step 5, based on a data set formed by the environmental parameters, the traffic accident spatiotemporal position information and the scene information, analyzing the data with a server and providing the analysis results for reference in subsequent road rescue decision-making.
2. The multi-scale comprehensive perception method oriented to the traffic accident emergency as claimed in claim 1, wherein: the step 2 specifically comprises the following steps:
step 201, the server, according to the traffic accident spatiotemporal position information, derives a waypoint-task execution control instruction together with three flight parameters: longitude and latitude coordinates, flight height and flight speed; the server sends the instruction and parameters to a mobile terminal platform running a flight control program, and the mobile terminal forwards them to the unmanned aerial vehicle controller;
step 202, taking the accident longitude and latitude coordinates as the center, calculating and planning three square routes around the traffic accident scene, one for each of the three gimbal pitch angles of 30°, 45° and 60° at which the unmanned aerial vehicle photographs;
step 203, during waypoint-task flight, the unmanned aerial vehicle photographs continuously at a set time interval and stores the images on the body memory card, recording the coordinates, elevation and camera parameters of each image, while simultaneously pushing the accident scene video stream to the server in real time;
and step 204, based on the traffic accident scene video stream shot by the unmanned aerial vehicle, extracting each vehicle's minimum enclosing rectangle with a video feature extraction algorithm, calculating the overlapping area of any two rectangles and, if they overlap, displaying the overlapping part with a green frame.
3. The traffic accident emergency oriented multi-scale comprehensive perception method as claimed in claim 1, wherein: the step 3 specifically comprises the following steps:
301, obtaining a planned running track of the mobile measuring vehicle through a map route planning API by utilizing accident longitude and latitude coordinates and adopting a riding route planning method;
step 302, acquiring the path coordinate points planned in the step 301 by adopting a coordinate and protocol conversion method, and acquiring a path file which can be used for the operation of the mobile measuring vehicle;
step 303, automatically compiling the path file into mobile measuring vehicle motion control messages, uploading the compiled control messages to the mobile measuring vehicle, and sending an unlocking instruction; the mobile measuring vehicle then proceeds to the target observation area to carry out ground observation;
step 304, the camera module on the mobile measuring vehicle takes pictures at a set time interval and stores them on the body's solid-state disk; it records at a set resolution and pushes the accident scene video stream to the server at a set bit rate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210532340.3A CN114926983A (en) | 2022-05-11 | 2022-05-11 | Traffic accident emergency oriented multi-scale comprehensive sensing method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210532340.3A CN114926983A (en) | 2022-05-11 | 2022-05-11 | Traffic accident emergency oriented multi-scale comprehensive sensing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114926983A (en) | 2022-08-19
Family
ID=82807845
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210532340.3A Pending CN114926983A (en) | 2022-05-11 | 2022-05-11 | Traffic accident emergency oriented multi-scale comprehensive sensing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114926983A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106767706A (en) * | 2016-12-09 | 2017-05-31 | 中山大学 | A kind of unmanned plane reconnoitres the Aerial Images acquisition method and system of the scene of a traffic accident |
CN108171988A (en) * | 2018-01-02 | 2018-06-15 | 湘潭大学 | A kind of highway accident investigation system and method |
CN110047269A (en) * | 2019-04-08 | 2019-07-23 | 王飞跃 | Accident support system, accident support method, electronic device and storage medium |
CN112712691A (en) * | 2019-10-24 | 2021-04-27 | 广州汽车集团股份有限公司 | Intelligent traffic accident processing method and device |
CN112200131A (en) * | 2020-10-28 | 2021-01-08 | 鹏城实验室 | Vision-based vehicle collision detection method, intelligent terminal and storage medium |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115083167A (en) * | 2022-08-22 | 2022-09-20 | 深圳市城市公共安全技术研究院有限公司 | Early warning method, system, terminal device and medium for vehicle leakage accident |
CN115439767A (en) * | 2022-11-08 | 2022-12-06 | 深圳互酷科技有限公司 | Unmanned aerial vehicle accident evidence obtaining method and system, terminal equipment and medium |
CN117391911A (en) * | 2023-12-08 | 2024-01-12 | 日照先森网络科技股份有限公司 | Smart city comprehensive management method and system |
CN117391911B (en) * | 2023-12-08 | 2024-02-27 | 日照先森网络科技股份有限公司 | Smart city comprehensive management method and system |
CN118379692A (en) * | 2024-04-24 | 2024-07-23 | 山东理工职业学院 | Road monitoring and identifying system and method based on computer vision |
CN118379884A (en) * | 2024-06-25 | 2024-07-23 | 高精特(成都)大数据科技有限公司 | Big data fusion all-in-one machine and data processing method |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- RJ01: Rejection of invention patent application after publication (application publication date: 2022-08-19)