Disclosure of Invention
The primary object of the invention is to provide a vehicle collision monitoring method, a vehicle collision monitoring apparatus, vehicle collision monitoring equipment, and a computer-readable storage medium, so as to solve the technical problems that existing vehicle collision monitoring technology is limited in applicability and has low prediction accuracy.
To achieve the above object, the present invention provides a vehicle collision monitoring method including the steps of:
obtaining streaming media data of a target vehicle to be monitored, and preprocessing the streaming media data to obtain target monitoring data;
performing feature engineering processing on the target monitoring data to obtain feature monitoring data of the streaming media data;
acquiring discrimination labeling information of the feature monitoring data and establishing a determination rule;
and monitoring the target vehicle according to the discrimination labeling information and the determination rule, so as to obtain collision monitoring information of the target vehicle and a target response scheme corresponding to the collision monitoring information.
Optionally, the step of preprocessing the streaming media data to obtain target monitoring data includes:
identifying interference data from the streaming media data;
and filtering and cleaning the interference data to remove the interference data from the streaming media data to obtain target monitoring data.
Optionally, the step of performing feature engineering processing on the target monitoring data to obtain the feature monitoring data of the streaming media data includes:
separating the target monitoring data to strip out audio track data and image video data from the target monitoring data;
performing audio feature engineering processing on the audio track data to extract target audio track feature data from the audio track data;
performing image feature engineering processing on the image video data to extract target image feature data from the image video data;
and integrating the target audio track feature data and the target image feature data to obtain the feature monitoring data of the streaming media data.
Optionally, the step of performing audio feature engineering processing on the audio track data to extract target audio track feature data from the audio track data includes:
classifying the audio track data to obtain first audio track feature data;
identifying the first audio track feature data to obtain second audio track feature data;
and performing feature extraction processing on the second audio track feature data to obtain the target audio track feature data.
Optionally, the step of performing image feature engineering processing on the image video data to extract target image feature data from the image video data includes:
performing image preprocessing on the image video data to obtain first image feature data;
and performing target detection processing on the first image feature data to obtain the target image feature data.
Optionally, the step of monitoring the target vehicle according to the discrimination labeling information and the determination rule to obtain collision monitoring information of the target vehicle and a target response scheme corresponding to the collision monitoring information includes:
detecting the feature monitoring data by using a preset monitoring model according to the discrimination labeling information to obtain a prediction result of the collision monitoring information of the target vehicle, wherein the preset monitoring model is a target monitoring model obtained by performing iterative training on a preset basic monitoring model based on vehicle streaming media data;
judging the prediction result according to the determination rule to obtain the collision monitoring information of the target vehicle;
and determining a target response scheme corresponding to the collision monitoring information according to the collision type and the collision grade in the collision monitoring information.
Optionally, the step of detecting the feature monitoring data by using a preset monitoring model according to the discrimination labeling information to obtain a prediction result of the collision monitoring information of the target vehicle includes:
detecting target audio track feature data in the feature monitoring data by using an audio monitoring model in the preset monitoring model to obtain a first prediction result;
detecting target image feature data in the feature monitoring data by using an image video monitoring model in the preset monitoring model to obtain a second prediction result;
and correlating the first prediction result with the second prediction result to obtain the prediction result of the collision monitoring information of the target vehicle.
Further, to achieve the above object, the present invention also provides a vehicle collision monitoring apparatus including:
a data preprocessing module, configured to acquire streaming media data of a target vehicle to be monitored and preprocess the streaming media data to obtain target monitoring data;
a feature extraction module, configured to perform feature engineering processing on the target monitoring data to obtain feature monitoring data of the streaming media data;
a label discrimination module, configured to acquire discrimination labeling information of the feature monitoring data and establish a determination rule;
and a collision monitoring module, configured to monitor the target vehicle according to the discrimination labeling information and the determination rule, so as to obtain collision monitoring information of the target vehicle and a target response scheme corresponding to the collision monitoring information.
Further, to achieve the above object, the present invention also provides vehicle collision monitoring equipment including: a memory, a processor, and a vehicle collision monitoring program stored on the memory and executable on the processor, the vehicle collision monitoring program, when executed by the processor, implementing the steps of the vehicle collision monitoring method as described above.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a vehicle collision monitoring program, which when executed by a processor, implements the steps of the vehicle collision monitoring method as described above.
The embodiment of the invention provides a vehicle collision monitoring method, apparatus, equipment, and computer-readable storage medium. The prior art can only detect a collision after the event, or is only applicable to monitoring collisions between vehicles, and is therefore greatly limited and of low prediction accuracy. In the embodiment of the invention, by contrast, streaming media data of a target vehicle to be monitored is obtained and preprocessed to obtain target monitoring data; feature engineering processing is then performed on the target monitoring data to obtain feature monitoring data of the streaming media data; discrimination labeling information of the feature monitoring data is acquired and a determination rule is established; and the target vehicle is monitored according to the discrimination labeling information and the determination rule to obtain collision monitoring information of the target vehicle and a target response scheme corresponding to that information. Through the image and audio data in the streaming media data, not only collisions between vehicles but also collisions between a vehicle and other objects can be monitored, which overcomes the limitations of existing vehicle collision monitoring technology and improves the prediction accuracy of vehicle collision monitoring. Predicting vehicle collisions can effectively reduce the accident rate, and monitoring them allows a target response scheme to be determined promptly after a collision occurs, providing support for on-site emergency rescue at collision events, inspection and repair of collision vehicles, legal assistance in road disputes, and the like.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
The vehicle collision monitoring terminal (also called a terminal, device, or terminal device) in the embodiment of the invention may be a PC (personal computer), or a mobile terminal device with a display function, such as a smart phone, a tablet computer, or a portable computer.
As shown in fig. 1, the terminal may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, wherein the communication bus 1002 is used to enable communication among these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like. The sensors include, for example, light sensors, motion sensors, and others. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display screen according to the brightness of ambient light, and a proximity sensor that turns off the display screen and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and, when the mobile terminal is stationary, the magnitude and direction of gravity; it can be used for applications that recognize the attitude of the mobile terminal (such as switching between horizontal and vertical screens, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer and tapping). Of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a vehicle collision monitoring program may be included in a memory 1005, which is one type of computer-readable storage medium.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to invoke a vehicle collision monitoring program stored in the memory 1005 that, when executed by the processor, implements the operations in the vehicle collision monitoring methods provided by the embodiments described below.
Based on the hardware structure of the equipment, the embodiment of the vehicle collision monitoring method is provided.
Referring to fig. 2, in a first embodiment of the vehicle collision monitoring method of the present invention, the vehicle collision monitoring method includes:
step S10, acquiring streaming media data of a target vehicle to be monitored, and preprocessing the streaming media data to obtain target monitoring data;
the vehicle collision monitoring method is based on streaming media data, monitors the occurrence of a vehicle collision accident through the streaming media data, and comprises the prediction and the detection of the collision accident in advance and in the afterward, wherein the streaming media data comprises but is not limited to automobile data recorder data, and the streaming media data corresponding to a target vehicle acquired through an automobile data recorder of the target vehicle to be monitored comprises the environmental image video information and the audio information of the target vehicle in the driving process. Because a camera of a driving recorder generally has a certain shooting angle, image video information and audio information in the acquired streaming media data may not be completely corresponding, and because of complex variability of environmental factors, interference data may exist in the streaming media data, such as thunderstorm sound in rainy days, sound useless for collision monitoring inside a target vehicle, and the like, including but not limited to unreal data, data with too short time, and secondary processing data, and the existence of the interference data may affect the accuracy of vehicle collision accident prediction, particularly secondary processing data, and because information loss or reconstruction possibly existing in the secondary processing data may cause the generation of false information to affect the accuracy of a prediction result, therefore, the acquired streaming media data needs to be preprocessed, the preprocessing includes data compression and gaussian fuzzy processing, and then by making data rules or utilizing an algorithm model, and filtering and removing interference data from the streaming media data to obtain target monitoring data, wherein the target monitoring data is the processed streaming media data.
Step S20, performing characteristic engineering processing on the target monitoring data to obtain characteristic monitoring data of the streaming media data;
the method comprises the steps of carrying out feature engineering processing on target monitoring data obtained through preprocessing, extracting components with identification in the target monitoring data, and enhancing sound or image video with obvious features generated in vehicle collision by using image and audio processing technology due to the fact that the target monitoring data are also streaming media data and comprise image video data and audio data, so that the identification of the data is enhanced, and the identification degree of the streaming media data is enhanced.
Since processing image video data differs technically from processing audio data, the preprocessed streaming media data, i.e., the target monitoring data, is separated to strip out its image video data and audio data. Because a video consists of multiple frames of images, processing a video is in effect processing its frames; the combination of image data and video data is therefore referred to below simply as image data. Feature engineering processing is performed separately on the stripped image data and audio track data: for example, the audio track data is classified and then subjected to identification and detection processing to obtain its corresponding feature data, while the image data is classified and then subjected to target detection processing to obtain its corresponding feature data. The feature data corresponding to the audio track data and to the image data is integrated to obtain the feature monitoring data corresponding to the streaming media data of the target vehicle. The feature monitoring data comprises object information in the image video data and audio clips with obvious features in the audio track data, where the object information covers vehicles, pedestrians, trees, and the like.
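The separation step above can be sketched as routing interleaved stream samples into two tracks. The record layout (timestamp, kind, payload) is an assumption made for the illustration and is not part of the specification.

```python
# Illustrative separation of interleaved streaming-media samples into an
# audio track and an image track, preserving timestamps so the tracks can
# later be re-associated on the time axis.

def separate(samples):
    audio_track, image_track = [], []
    for ts, kind, payload in samples:
        if kind == "audio":
            audio_track.append((ts, payload))
        elif kind == "image":
            image_track.append((ts, payload))
    return audio_track, image_track

stream = [(0.00, "image", "frame0"), (0.02, "audio", "pcm0"),
          (0.04, "image", "frame1"), (0.06, "audio", "pcm1")]
audio, images = separate(stream)
```

Real demultiplexing of recorder files would be done with a media library; this sketch only shows the shape of the operation.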
Step S30, acquiring discrimination labeling information of the feature monitoring data and establishing a determination rule;
the method includes the steps of obtaining distinguishing and labeling information of the characteristic monitoring data, wherein the distinguishing and labeling information can be preset and includes but is not limited to a labeling information base formed by judging and labeling characteristic data in sample streaming media data through a manual or identification technology, for example, artificially labeling object information in the characteristic data corresponding to image data and characteristic audio in the characteristic data corresponding to audio track data, determining which object information and audio data indicate that a vehicle collision event is about to occur or occurs, such as manually labeling a vehicle head detection frame and a collision type, manually judging whether the collision event occurs or not, and classifying collision grades according to the size and frequency of a collision point sound. Alternatively, through image and audio recognition techniques, objects in the image are identified and labeled with object names such as "vehicle," "pedestrian," "tree," etc., and a determination is made as to which objects are present in the environment surrounding the target vehicle, how the respective objects are distributed, and which audio is present, indicating that a vehicle collision event is about to occur or has occurred. And meanwhile, establishing a judgment rule, judging whether the vehicle collision event occurs or not according to the judgment marking information of the characteristic monitoring data, and predicting the probability of the collision event if the vehicle collision event does not occur. For example, when a plurality of vehicles are present around the target vehicle and the traveling speed and direction of each vehicle are acquired, the traveling speed and traveling direction of a certain vehicle are determined singly, and whether a collision event occurs cannot be predicted accurately. 
By matching the feature monitoring data corresponding to the target vehicle with the data in the preset discrimination labeling information base, the discrimination labeling information of each feature in the feature monitoring data is obtained; by associating the individual features according to the established determination rule, the streaming media data corresponding to the surroundings of the target vehicle can be analyzed comprehensively, improving the accuracy with which the occurrence and probability of a collision event are predicted.
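The lookup against a labeling information base can be sketched as a dictionary match. The base contents and the `collision_relevant` flag are invented for illustration; a real base would carry the labeling described in the text (detection frames, collision types, grades).

```python
# Hedged sketch of matching extracted features against a pre-built
# discrimination labeling information base: each detected feature is
# looked up to retrieve its labeling information.

LABEL_BASE = {
    "vehicle":    {"collision_relevant": True},
    "pedestrian": {"collision_relevant": True},
    "tree":       {"collision_relevant": True},
    "billboard":  {"collision_relevant": False},
}

def label_features(detected):
    """Return (feature, label) pairs for features present in the base;
    features with no entry in the base are left unlabeled."""
    return [(f, LABEL_BASE[f]) for f in detected if f in LABEL_BASE]

labeled = label_features(["vehicle", "pedestrian", "unknown_blob"])
```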
Step S40, monitoring the target vehicle according to the discrimination labeling information and the determination rule to obtain collision monitoring information of the target vehicle and a target response scheme corresponding to the collision monitoring information.
Monitoring the target vehicle means acquiring its streaming media data in real time, including streaming media data from the driving recorder and streaming media data generated by cameras, laser sensors, and the like preset on the target vehicle, and applying the preprocessing and feature engineering processing above to obtain the feature monitoring data of the target vehicle. The streaming media data of the target vehicle is then analyzed in real time according to the established determination rule and the discrimination labeling information of the acquired feature monitoring data, to obtain the collision monitoring information of the target vehicle. The collision monitoring information comprises the probability that a collision event occurs, the collision type, the collision grade, and the like, where the collision types include frontal vehicle-vehicle collision, frontal vehicle-object collision, frontal vehicle-human collision, side collision, rear-end collision, and so on.
After the collision monitoring information of the target vehicle is acquired, a corresponding target response scheme is determined; different response schemes can be provided for different collision types and grades. For example, for a vehicle-object collision: when the monitored probability of the collision event is high and exceeds a preset threshold, say because the target vehicle is travelling fast and the acquired streaming media data shows a construction fence across the road ahead, the driver is alerted at a preset distance from the fence, and this alarm is part of the response scheme. For a vehicle-human collision: when a pedestrian is detected in front of the target vehicle, the driver may be prompted to decelerate or stop at a preset distance from the pedestrian. Since in an emergency the accelerator may be mistaken for the brake, the driver may do the opposite of what the prompt asks; if the target vehicle is detected, at the shortest braking distance from the pedestrian for the current speed, to be still not decelerating or even accelerating, it can be judged that the probability of a collision is high and the collision grade severe, and the target vehicle is controlled to brake urgently to prevent the collision. Alerting the driver and controlling the emergency braking likewise belong to the response scheme.
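The emergency-braking decision above hinges on the shortest braking distance at the current speed. A standard kinematic estimate (not taken from the specification) is d = v² / (2·μ·g); the friction coefficient below is an assumed value.

```python
# Braking-distance sketch for the vehicle-human scenario: trigger
# emergency braking when the gap to the pedestrian is inside the shortest
# braking distance and the driver is not decelerating.

G = 9.81   # gravitational acceleration, m/s^2
MU = 0.7   # assumed tyre-road friction coefficient (dry asphalt)

def braking_distance(speed_mps):
    """Shortest stopping distance d = v^2 / (2 * mu * g), in metres."""
    return speed_mps ** 2 / (2 * MU * G)

def must_emergency_brake(speed_mps, gap_m, decelerating):
    return gap_m <= braking_distance(speed_mps) and not decelerating

# At ~50 km/h (13.9 m/s) the braking distance is roughly 14 m, so a 10 m
# gap with no deceleration would trigger the emergency brake.
```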
When a collision event is detected from the acquired streaming media data, such as the vehicle-human collision above, and the collision grade is high, communication is established with the emergency rescue center and with the repair center of the target vehicle to request emergency rescue, or the current position information and the collision monitoring information such as collision type and grade are displayed to give the driver the necessary information for requesting support. Establishing communication and displaying necessary information are also part of the response scheme for the collision event, and together with alerting the driver and emergency braking they form a relatively complete response scheme. That is, a complete response scheme should include both prior prevention and subsequent support: measures that reduce the probability of collision when a possible collision event is detected, and support for rescue and other after-the-fact work when a collision event has occurred.
The refinement of step S10 comprises steps A1-A2:
step A1, identifying interference data from the streaming media data;
step A2, filtering and cleaning the interference data to remove the interference data from the streaming media data to obtain target monitoring data.
Interference data is identified from the acquired streaming media data of the target vehicle. The identification can likewise be based on the preset discrimination labeling information base: feature data and labeling information of interference data are called from the base and matched against the streaming media data of the target vehicle to identify the interference data in it. The streaming media data of the target vehicle is then filtered and cleaned, for example by clipping or frame extraction, to remove the interference data and obtain the target monitoring data.
The refinement of step S40 includes steps B1-B3:
step B1, detecting the feature monitoring data by using a preset monitoring model according to the discrimination labeling information to obtain a prediction result of the collision monitoring information of the target vehicle, wherein the preset monitoring model is a target monitoring model obtained by performing iterative training on a preset basic monitoring model based on vehicle streaming media data;
step B2, judging the prediction result according to the determination rule to obtain the collision monitoring information of the target vehicle;
and step B3, determining a target response scheme corresponding to the collision monitoring information according to the collision type and the collision grade in the collision monitoring information.
According to the discrimination labeling information of the feature monitoring data of the target vehicle, obtained from the preset discrimination labeling information base, the feature monitoring data is detected with a preset monitoring model to obtain a prediction result of the collision monitoring information of the target vehicle. The prediction result includes, but is not limited to, the probability of a collision event, the collision types and grades of possible collision events, and the probability corresponding to each type and grade. The preset monitoring model may be a target monitoring model obtained by iteratively training a preset basic monitoring model on vehicle streaming media data as samples; the sample streaming media data includes, but is not limited to, public vehicle data obtainable from the Internet.
Since the prediction result of the collision monitoring information of the target vehicle may contain several collision types and grades, correlation analysis is performed on the prediction result according to the determination rule established in advance to obtain a final prediction result, namely the collision type, collision grade, and so on of the most probable collision event. The final target response scheme is determined from the collision type and grade in that result: if the target response scheme includes a preventive measure, the preventive measure is executed to prevent the collision, and when a collision event is detected, the post-collision response measures in the target response scheme are executed.
After the prediction result of the collision monitoring information of the target vehicle is determined, the final prediction result of the collision monitoring of the target vehicle is determined according to the determination rule shown in equations 1-4 below, which are only an illustration of the determination rule in this embodiment and do not limit it:
Pa1(Y_hat = 1) = p (1)
Cx = argmax_C Pa2_C(Yc_hat = C) (2)
when p is greater than or equal to a preset threshold:
Vx = argmax_V Pv_V(Yv_hat = V) (3)
when p is less than the preset threshold:
Y_hat = 0 (4)
where Pa1 denotes the predicted collision probability of the target vehicle and p its probability value; Cx is the predicted collision grade of the target vehicle x, Pa2_C is the predicted probability when the collision grade is C, and Yc_hat = C denotes that the collision grade is C; Vx is the predicted collision type of the target vehicle x, Pv_V is the predicted probability when the collision type is V, and Yv_hat = V denotes that the collision type is V; Y_hat = 1 means that a collision occurs and Y_hat = 0 that it does not; argmax(f(x)) returns the argument x that maximizes f(x), and in equations 2-3 it selects the collision grade or collision type with the highest probability.
Based on this determination rule, when the detected probability of the target vehicle colliding is greater than or equal to the preset threshold, collision monitoring information such as the collision probability and collision type is shown to the user by an alarm or similar means so that the target response scheme can be executed; when the detected probability is below the threshold, it is judged that no collision event occurs. The collision monitoring information can also be displayed to the user in real time, and when the detected collision probability reaches the threshold, the target response scheme is triggered through an alarm or the like and preventive measures are shown to the user to reduce the probability of the collision.
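Equations 1-4 can be transcribed directly into code. The threshold value and the probability tables below are invented example data; only the rule structure comes from the text.

```python
# Transcription of determination rules (1)-(4): if the collision
# probability p reaches the preset threshold, report the most probable
# collision grade (eq. 2) and type (eq. 3); otherwise Y_hat = 0 (eq. 4).

THRESHOLD = 0.5  # "preset threshold" from the rule; value assumed

def decide(p, grade_probs, type_probs):
    if p < THRESHOLD:
        return {"collision": 0}                     # eq. (4)
    grade = max(grade_probs, key=grade_probs.get)   # eq. (2), argmax over C
    ctype = max(type_probs, key=type_probs.get)     # eq. (3), argmax over V
    return {"collision": 1, "grade": grade, "type": ctype}

result = decide(0.8,
                grade_probs={1: 0.3, 2: 0.7},
                type_probs={"vehicle-vehicle": 0.6, "vehicle-human": 0.4})
```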
One way of determining the target response scheme is shown in table 1 below; table 1 is only one such determination manner in this embodiment and does not limit the manner or content of the response scheme in the embodiments of the present invention:
Collision type | Collision grade | Target response scheme
Frontal vehicle-vehicle collision | 1 | Response scheme 1
Frontal vehicle-vehicle collision | 2 | Response scheme 2
Frontal vehicle-human collision | 1 | Response scheme 3
Frontal vehicle-human collision | 2 | Response scheme 4
Frontal vehicle-object collision | 1 | Response scheme 5
Frontal vehicle-object collision | 2 | Response scheme 6
Side/rear-end collision | 1 | Response scheme 7
Side/rear-end collision | 2 | Response scheme 8
TABLE 1
The specific contents of response schemes 1 to 8 in table 1 may be set by the user as required: for example, according to the monitoring requirements for the target vehicle, the user selects and enables, among preset options, the response measures of each scheme, such as alarming, emergency braking, and establishing communication. The user in this embodiment may be the driver of the monitored target vehicle.
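The mapping of table 1 amounts to a lookup keyed by collision type and grade. The scheme names mirror the table; the key strings are shortened for the sketch.

```python
# Table 1 as a lookup: (collision type, collision grade) -> response scheme.
# Concrete measures inside each scheme are user-configurable, per the text.

RESPONSE_SCHEMES = {
    ("frontal vehicle-vehicle", 1): "Response scheme 1",
    ("frontal vehicle-vehicle", 2): "Response scheme 2",
    ("frontal vehicle-human", 1):   "Response scheme 3",
    ("frontal vehicle-human", 2):   "Response scheme 4",
    ("frontal vehicle-object", 1):  "Response scheme 5",
    ("frontal vehicle-object", 2):  "Response scheme 6",
    ("side/rear-end", 1):           "Response scheme 7",
    ("side/rear-end", 2):           "Response scheme 8",
}

def target_response(collision_type, grade):
    """Return the response scheme for the type/grade pair, or None."""
    return RESPONSE_SCHEMES.get((collision_type, grade))

scheme = target_response("frontal vehicle-human", 2)
```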
The refinement of step B1 comprises steps B11-B13:
step B11, detecting target audio track feature data in the feature monitoring data by using an audio monitoring model in the preset monitoring model to obtain a first prediction result;
step B12, detecting target image feature data in the feature monitoring data by using an image video monitoring model in the preset monitoring model to obtain a second prediction result;
and step B13, correlating the first prediction result with the second prediction result to obtain the prediction result of the collision monitoring information of the target vehicle.
Since different algorithms are used to extract feature data from audio data and from image data, different algorithm models are likewise needed to identify and detect the extracted feature data. The audio monitoring model in the preset monitoring model monitors the target audio track feature data to obtain a voice recognition result. One preferred audio monitoring model is a DenseNet model, which alleviates vanishing gradients, strengthens feature propagation, and encourages feature reuse; the DenseNet model is trained on target audio track feature data and then detects that data to obtain the voice recognition result corresponding to the audio track data. Whether a collision occurs is detected from the voice recognition result, and if a collision event occurs, information such as the collision grade is determined from it. Likewise, the image video monitoring model in the preset monitoring model monitors the target image feature data in the image data to obtain a target detection result, from which information such as the collision type is determined.
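The audio monitoring model itself is a trained network; as a stand-in to show the kind of signal it keys on, the sketch below flags audio frames whose short-time energy spikes far above the background level — the signature an impact leaves in recorder audio. The frame format, ratio, and use of a median background are all invented for the illustration.

```python
# Energy-spike sketch standing in for the audio monitoring model: frames
# whose short-time energy greatly exceeds the (median) background energy
# are flagged as collision-like.
import statistics

def energy(frame):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in frame) / len(frame)

def collision_like_frames(frames, ratio=10.0):
    """Indices of frames whose energy exceeds `ratio` times the median
    background energy of the clip."""
    energies = [energy(f) for f in frames]
    background = statistics.median(energies)
    return [i for i, e in enumerate(energies) if e > ratio * background]

quiet = [0.01, -0.01, 0.02, -0.02]     # low-amplitude road noise
bang = [0.9, -0.8, 1.0, -0.9]          # loud impact-like burst
hits = collision_like_frames([quiet, quiet, bang, quiet])
```

A trained DenseNet classifier over spectrogram features would replace this heuristic in the actual system.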
A preferred model for the image video monitoring model is YOLOv3 (You Only Look Once, version 3), a third-generation target detection algorithm model. The YOLOv3 model has good recognition accuracy for small objects and a good depth detection capability, and can accurately recognize and detect overlapping or closely spaced objects. The YOLOv3 model is used as the image video monitoring model to detect the target image feature data, and a distance discrimination algorithm is combined with it to calculate the distance between a monitored target and the target vehicle, thereby obtaining the target detection result. The preset monitoring model then associates the audio monitoring model with the image video monitoring model, so that the speech recognition result and the target detection result are associated to obtain the prediction result of the collision event.
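The distance discrimination algorithm is not specified here. One common monocular approach, shown purely as an illustrative sketch, uses the pinhole-camera relation between an object's assumed real-world height and the height of its detected bounding box in pixels; the focal length and vehicle height below are assumed example values:

```python
def estimate_distance(focal_px, real_height_m, bbox_height_px):
    """Monocular distance estimate under a pinhole-camera model:
    distance = focal_length_in_pixels * real_height / pixel_height."""
    if bbox_height_px <= 0:
        raise ValueError("bounding box height must be positive")
    return focal_px * real_height_m / bbox_height_px

# A detected vehicle (assumed real height 1.5 m) whose bounding box is
# 150 px tall, seen by a camera with an assumed 700 px focal length:
d = estimate_distance(focal_px=700, real_height_m=1.5, bbox_height_px=150)
print(round(d, 1))  # 7.0 (metres)
```

As the bounding box grows taller between frames, the estimated distance shrinks, which is the signal a collision monitor would watch for an approaching object.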
When model training is performed, after the prediction results of the feature data are obtained, the results are associated. This includes, but is not limited to, reconstructing the audio data and the image data based on time axis information: the audio data and the image data in the same time period are combined to reconstruct complete streaming media data containing both, and the scene information of the collision event is restored through this data association to obtain the image and audio feature data at the time the collision event occurred. When the target vehicle is monitored, the discrimination marking information of the feature data is obtained from a discrimination marking information base, and the feature data carrying the marking information are associated, so that the probability of the collision event, the collision type and collision grade, and the probabilities corresponding to the collision type and collision grade can be determined.
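The time-axis association described above can be sketched as follows. This is an illustrative fusion of hypothetical per-modality predictions; the tolerance window and the product-of-probabilities fusion are assumptions for the sketch, not part of the claimed method:

```python
def associate(audio_preds, image_preds, window_s=0.5):
    """Pair each audio prediction (time_s, collision_prob) with the image
    prediction (time_s, collision_type, prob) nearest to it on the time
    axis, within a tolerance window, and fuse their probabilities."""
    events = []
    for ta, pa in audio_preds:
        nearest = min(image_preds, key=lambda ip: abs(ip[0] - ta))
        if abs(nearest[0] - ta) <= window_s:
            events.append({"time": ta,
                           "type": nearest[1],
                           "prob": pa * nearest[2]})  # simple product fusion
    return events

# A loud impact sound at t = 10.2 s; image detections at t = 10.0 s and 3.0 s.
audio = [(10.2, 0.9)]
image = [(10.0, "rear-end", 0.8), (3.0, "none", 0.1)]
print(associate(audio, image))
```

Only the image detection within the time window is paired with the impact sound, so the unrelated detection at t = 3.0 s does not contribute to the collision event.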
In this embodiment, streaming media data of a target vehicle to be monitored is acquired and preprocessed to obtain target monitoring data; feature engineering processing is then performed on the target monitoring data to obtain feature monitoring data of the streaming media data; discrimination marking information of the feature monitoring data is acquired and a determination rule is established; and the target vehicle is monitored according to the discrimination marking information and the determination rule to obtain collision monitoring information of the target vehicle and a target response scheme corresponding to the collision monitoring information. Through the image and audio data in the streaming media data, not only collisions between vehicles but also collisions between vehicles and other objects can be monitored, which overcomes the limitations of the existing vehicle collision monitoring technology and improves the prediction accuracy of vehicle collision monitoring. Predicting vehicle collisions can effectively reduce the accident rate, and monitoring vehicle collisions allows a target response scheme to be determined promptly after a collision occurs, providing support for on-site emergency rescue of collision events, inspection and maintenance of collided vehicles, legal assistance in road disputes, and the like.
Further, referring to fig. 3, a second embodiment of the vehicle collision monitoring method of the present invention is proposed on the basis of the above-described embodiment of the present invention.
The present embodiment refines step S20 of the first embodiment and includes steps S21-S24:
step S21, performing separation processing on the target monitoring data to strip out audio track data and image video data from the target monitoring data;
step S22 of performing audio feature engineering processing on the track data to extract target track feature data from the track data;
step S23, image characteristic engineering processing is carried out on the image video data so as to extract target image characteristic data from the image video data;
step S24, integrating the target audio track feature data with the target image feature data to obtain feature monitoring data of the streaming media data.
The audio data and the image data included in the streaming media data of the target vehicle have different characteristics, and the feature data in the streaming media data can be extracted by feature engineering processing. The audio track data and the image data are stripped from the target monitoring data obtained by processing the acquired original streaming media data. Audio feature engineering processing is performed on the stripped audio track data to extract target audio track feature data, and image feature engineering processing is performed on the stripped image data to extract target image feature data; a classifier, such as the DenseNet model, may be used to classify the image data. The target audio track feature data and the target image feature data obtained through feature engineering processing are then merged and associated to obtain the feature monitoring data corresponding to the streaming media data of the target vehicle. Specifically, as one way of merging and associating the target audio track feature data and the target image feature data, taking the DenseNet model as an example, after the feature data of the audio track data and the image data corresponding to the streaming media data are extracted, the DenseNet model is used to classify and identify them, thereby obtaining the feature monitoring data corresponding to the streaming media data of the target vehicle.
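The flow of steps S21-S24 can be sketched as the following skeleton, in which `separate`, `audio_feature_engineering`, and `image_feature_engineering` are hypothetical stand-ins for the demuxing, MFCC, and YOLOv3 stages described in this specification:

```python
def separate(stream):
    # Placeholder demux: real code would split the audio and video
    # tracks out of the media container (e.g. with ffmpeg).
    return stream["audio"], stream["video"]

def audio_feature_engineering(track):
    return [s * 2 for s in track]           # stand-in for MFCC extraction (S22)

def image_feature_engineering(video):
    return [len(frame) for frame in video]  # stand-in for YOLOv3 features (S23)

def feature_monitoring_data(data):
    track, video = separate(data)                        # S21: separation
    return {"audio": audio_feature_engineering(track),   # S24: integrate the
            "image": image_feature_engineering(video)}   # two feature sets

demo = {"audio": [1, 2], "video": [[0, 0, 0]]}
print(feature_monitoring_data(demo))  # {'audio': [2, 4], 'image': [3]}
```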
The refinement of the step S22 comprises the steps C1-C3:
step C1, the audio track data is classified to obtain the first audio track characteristic data;
step C2, identifying the first audio track characteristic data to obtain second audio track characteristic data;
and step C3, performing feature extraction processing on the second audio track feature data to obtain target audio track feature data.
In this embodiment, a preferred method for performing feature engineering processing on the audio track data is the mel-frequency cepstrum coefficient (MFCC) method. When the MFCC method is used, the audio track data is first classified, for example by frequency, volume, or other distinguishing characteristics, to obtain the first audio track feature data. Speech recognition processing is then performed on the first audio track feature data: for example, speech noise reduction and speech enhancement are applied to the audio track data using spectral subtraction, followed by sampling and compression. The sampled audio track data is then framed, that is, the speech signal is divided into frames and each frame signal is analyzed and processed; a window function is then applied, which generally has a low-pass characteristic and reduces signal leakage in the frequency domain; and finally a Fourier transform is performed to obtain the second audio track feature data. These speech classification and speech recognition operations on the audio track data may be called the data preprocessing stage of the MFCC method. After the audio track data is preprocessed to obtain the second audio track feature data, feature extraction processing is performed on the second audio track feature data to obtain the target audio track feature data.
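The framing, windowing, and Fourier-transform steps described above can be sketched as follows. This is an illustrative NumPy sketch; the frame length of 400 samples and hop of 160 samples are assumed values typical for 16 kHz speech, not parameters fixed by this specification:

```python
import numpy as np

def frame_and_window(signal, frame_len=400, hop=160):
    """Split a 1-D speech signal into overlapping frames, apply a
    Hamming window (which reduces spectral leakage), and take the
    magnitude spectrum of each frame via the Fourier transform."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    frames = np.stack([signal[i * hop: i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

# 1 second of a 50 Hz tone sampled at 16 kHz -> 98 frames of 201 bins each.
sig = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 16000))
spec = frame_and_window(sig)
print(spec.shape)  # (98, 201)
```

Each row of `spec` is the short-time magnitude spectrum of one frame, the input to the cepstral analysis that follows.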
When the feature extraction processing is performed on the second audio track feature data, note that the original audio track data is generally a time-domain signal. Using the mel-frequency cepstrum coefficient method, each sampled short time-domain frame is converted into a frequency-domain signal to obtain the mel spectrogram corresponding to each frame of audio track data; the formants in the mel spectrogram contain the identification information of the audio data. Cepstral analysis is then performed on the mel spectrogram: the logarithm of the speech signal's spectrum is taken and an inverse transform is computed, yielding the mel-frequency cepstrum coefficients of each frame of audio track data. These coefficients contain the formant information carrying the identification information, and the obtained mel-frequency cepstrum coefficients are the feature data corresponding to that frame of audio track data. The collision occurrence probability and the collision grade in the collision monitoring information can then be monitored based on the audio track feature data.
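The cepstral-analysis step (mel filterbank, logarithm, inverse transform) can be sketched as follows. This is an illustrative toy implementation: the filter count and number of kept coefficients are chosen arbitrarily, and a DCT-II serves as the inverse transform, as is conventional for MFCCs:

```python
import numpy as np

def mfcc_from_power(power_spec, n_mels=8, n_ceps=4, sr=16000):
    """Toy cepstral analysis of one frame's power spectrum: triangular
    mel filterbank -> logarithm -> DCT-II (the 'take the logarithm and
    inverse-transform' step described above)."""
    n_bins = power_spec.shape[-1]
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv_mel(np.linspace(0.0, mel(sr / 2), n_mels + 2))      # centre Hz
    bins = np.floor((n_bins - 1) * pts / (sr / 2)).astype(int)    # FFT bins
    fb = np.zeros((n_mels, n_bins))
    for i in range(n_mels):                    # triangular filters
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fb[i, l:c] = np.linspace(0.0, 1.0, c - l, endpoint=False)
        if r > c:
            fb[i, c:r] = np.linspace(1.0, 0.0, r - c, endpoint=False)
    log_mel = np.log(power_spec @ fb.T + 1e-10)
    k = np.arange(n_mels)
    dct = np.cos(np.pi * (k[:, None] + 0.5)
                 * np.arange(n_ceps)[None, :] / n_mels)
    return log_mel @ dct                       # first n_ceps coefficients

frame = np.random.default_rng(1).standard_normal(400)
power = np.abs(np.fft.rfft(frame)) ** 2   # one windowed frame's power spectrum
coeffs = mfcc_from_power(power)
print(coeffs.shape)  # (4,)
```

The resulting low-order cepstral coefficients summarise the spectral envelope, including the formant structure, of one frame.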
The refinement of the step S23 includes the steps D1-D2:
step D1, carrying out image preprocessing on the image video data to obtain first image characteristic data;
and D2, performing target detection processing on the first image characteristic data to obtain target image characteristic data.
When feature engineering processing is performed on the image data, the image data needs to be preprocessed just as the audio track data is. For example, images with a preset resolution are extracted from the image data, and compression and Gaussian blurring are applied; missing information is then filled in by a fixed-frame-rate resampling method, through operations such as clipping, frame extraction, and windowing, to obtain the first image feature data. Target detection processing is then performed on the first image feature data using the YOLOv3 model, and each target object and its distance from the target vehicle are identified from the image data to obtain the target image feature data. The collision type in the collision monitoring information can be monitored based on the target image feature data.
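The fixed-frame-rate resampling used to fill in missing frames can be sketched as follows. This is an illustrative sketch only; the nearest-frame policy and the rule of walking back to the last received frame are assumptions, not details fixed by this specification:

```python
def resample_frames(frames, src_fps, dst_fps):
    """Fixed-frame-rate resampling: for each output time slot, take the
    nearest source frame; dropped frames (None) are filled by walking
    back to the last frame that actually arrived."""
    duration = len(frames) / src_fps
    out = []
    for i in range(int(duration * dst_fps)):
        t = i / dst_fps
        j = min(round(t * src_fps), len(frames) - 1)
        while frames[j] is None and j > 0:   # fill gaps from earlier frames
            j -= 1
        out.append(frames[j])
    return out

# Six frames captured at 30 fps with one dropped frame, resampled to 15 fps.
frames = ["f0", "f1", None, "f3", "f4", "f5"]
print(resample_frames(frames, src_fps=30, dst_fps=15))  # ['f0', 'f1', 'f4']
```

The output stream always has a frame at every slot of the target rate, so the downstream detector never sees a gap even when the capture pipeline drops frames.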
It should be noted that when feature engineering processing is performed on the audio track data and the image data, both are compressed. Although streaming media transmitted over a 5G network can meet the data transmission requirements of real-time monitoring even without compression, compressing the data effectively increases the transmission rate and reduces the transmission time, thereby reducing the impact on the monitoring effect when the network signal is unstable.
In this embodiment, the target monitoring data is subjected to separation processing to strip audio track data and image video data from the target monitoring data, audio feature engineering processing is performed on the audio track data to extract target audio track feature data from the audio track data, image feature engineering processing is performed on the image video data to extract target image feature data from the image video data, and the target audio track feature data and the target image feature data are integrated to obtain the feature monitoring data of the streaming media data. By carrying out feature engineering processing on the streaming media data, extracting feature monitoring data in the streaming media data and monitoring the collision event of the target vehicle based on the feature monitoring data, the prediction accuracy of collision monitoring information in the monitoring process can be effectively improved.
Further, referring to fig. 4, an embodiment of the present invention also proposes a vehicle collision monitoring apparatus including:
the data preprocessing module 10 is configured to acquire streaming media data of a target vehicle to be monitored, and preprocess the streaming media data to obtain target monitoring data;
the feature extraction module 20 is configured to perform feature engineering processing on the target monitoring data to obtain feature monitoring data of the streaming media data;
a label judging module 30, configured to obtain discrimination marking information of the feature monitoring data and establish a determination rule;
and the collision monitoring module 40 is configured to monitor the target vehicle according to the discrimination marking information and the discrimination rule to obtain collision monitoring information of the target vehicle and a target response scheme corresponding to the collision monitoring information.
Optionally, the data preprocessing module 10 includes:
an interference identification unit, configured to identify interference data from the streaming media data;
and the data removing unit is used for filtering and cleaning the interference data so as to remove the interference data from the streaming media data to obtain target monitoring data.
Optionally, the feature extraction module 20 includes:
the data stripping unit is used for carrying out separation processing on the target monitoring data so as to strip out audio track data and image video data from the target monitoring data;
a track feature extraction unit configured to perform audio feature engineering processing on the track data to extract target track feature data from the track data;
the image feature extraction unit is used for carrying out image feature engineering processing on the image video data so as to extract target image feature data from the image video data;
and the data association unit is used for integrating the target audio track characteristic data and the target image characteristic data to obtain the characteristic monitoring data of the streaming media data.
Optionally, the track feature extraction unit includes:
the enhancement sampling subunit is used for carrying out classification processing on the audio track data to obtain first audio track characteristic data;
the framing and windowing subunit is used for identifying and processing the first audio track characteristic data to obtain second audio track characteristic data;
and the audio track feature extraction subunit is used for performing feature extraction processing on the second audio track feature data to obtain target audio track feature data.
Optionally, the image feature extraction unit includes:
the sampling compression subunit is used for carrying out image preprocessing on the image video data to obtain first image characteristic data;
and the image feature extraction subunit is used for carrying out target detection processing on the first image feature data to obtain target image feature data.
Optionally, the collision monitoring module 40 includes:
the model prediction unit is used for detecting the feature monitoring data by using a preset monitoring model according to the discrimination marking information to obtain a prediction result of the collision monitoring information of the target vehicle, wherein the preset monitoring model is a target monitoring model obtained by performing iterative training on a preset basic monitoring model based on the streaming media data of the vehicle;
a prediction result determination unit configured to determine the prediction result according to the determination rule to obtain collision monitoring information of the target vehicle;
and the collision information acquisition unit is used for determining a target response scheme corresponding to the collision monitoring information according to the collision type and the collision grade in the collision monitoring information.
Optionally, the model prediction unit includes:
the audio track characteristic detection subunit is used for detecting target audio track characteristic data in the characteristic monitoring data by using an audio monitoring model in a preset monitoring model to obtain a first prediction result;
the image characteristic detection subunit is used for detecting target image characteristic data in the characteristic monitoring data by using an image video monitoring model in a preset monitoring model to obtain a second prediction result;
and the result correlation subunit is used for correlating the first prediction result with the second prediction result to obtain a prediction result of the collision detection information of the target vehicle.
Furthermore, an embodiment of the present invention provides a computer-readable storage medium on which a vehicle collision monitoring program is stored; when executed by a processor, the program implements the operations in the vehicle collision monitoring method provided in the foregoing embodiments.
The method executed by each program module can refer to each embodiment of the method of the present invention, and is not described herein again.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity/action/object from another entity/action/object without necessarily requiring or implying any actual such relationship or order between such entities/actions/objects; the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
For the apparatus embodiment, since it is substantially similar to the method embodiment, it is described relatively simply, and reference may be made to some descriptions of the method embodiment for relevant points. The above-described apparatus embodiments are merely illustrative, in that elements described as separate components may or may not be physically separate. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be substantially or partially embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the vehicle collision monitoring method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.