
CN116189114B - Method and device for identifying collision trace of vehicle - Google Patents


Info

Publication number
CN116189114B
CN116189114B CN202310433116.3A
Authority
CN
China
Prior art keywords
vehicle
collision
information
image
road surface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310433116.3A
Other languages
Chinese (zh)
Other versions
CN116189114A (en)
Inventor
李师可
熊庆
廖文俊
李平飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xihua University
Original Assignee
Xihua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xihua University filed Critical Xihua University
Priority to CN202310433116.3A priority Critical patent/CN116189114B/en
Publication of CN116189114A publication Critical patent/CN116189114A/en
Application granted granted Critical
Publication of CN116189114B publication Critical patent/CN116189114B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a method and a device for identifying vehicle collision traces, relating to the technical field of trace identification. The method comprises: acquiring environmental information and vehicle image information of a vehicle collision site; preprocessing the environmental information of the collision site and determining the driving direction of the vehicle to obtain the driving direction information of the collision vehicle; and sending the vehicle image information to an image recognition module for distinguishing-image recognition, then analyzing the distinguishing image together with the driving direction information of the collision vehicle to obtain the driving speed of the collision vehicle. The method reduces manual operation, improves the objectivity of the data, lowers the threshold for trace identification, and greatly improves the efficiency of trace analysis.

Description

Method and device for identifying collision trace of vehicle
Technical Field
The invention relates to the technical field of trace identification, in particular to a method and a device for identifying collision trace of a vehicle.
Background
At present, in traffic accident appraisal, trace identification is a very important link in judicial appraisal: it is used to determine the division of responsibility between the parties to a traffic accident, to reconstruct the accident, to analyze the accident process, and so on. However, trace analysis still relies largely on the experience of the appraiser; the main methods are visual inspection and touch, and some appraisers examine the trace microscopically with a low-magnification magnifying glass, which depends on the appraiser's experience and consumes a great deal of manpower and material resources. A device capable of automatically identifying vehicle collision traces is therefore needed, to provide data support for responsibility division and to reduce the subjectivity of manual judgment.
Disclosure of Invention
The invention aims to provide a vehicle collision trace identification method and device to address the above problems. In order to achieve this purpose, the technical scheme adopted by the invention is as follows:
in one aspect, the present application provides a method for identifying a collision trace of a vehicle, including:
acquiring environment information and vehicle image information of a vehicle collision site, wherein the environment information of the vehicle collision site comprises road surface tyre print image information and road surface residue image information, and the vehicle image information comprises image information before a vehicle collision and image information after the vehicle collision;
preprocessing the environmental information of the vehicle collision site, and sending the preprocessed environmental information of the vehicle collision site to a running direction determining module for determining the running direction of the vehicle to obtain the running direction information of the collision vehicle;
the vehicle image information is sent to an image recognition module for distinguishing image recognition, and the distinguishing image and the traveling direction information of the collision vehicle are sent to a vehicle traveling speed analysis module for analysis, so that the traveling speed of the collision vehicle is obtained;
and sending the driving direction information of the collision vehicle and the driving speed of the collision vehicle to a data storage module for storage, and sending a first command, wherein the first command is a command for prompting staff to divide responsibility based on the data in the data storage module.
In another aspect, the present application also provides a vehicle collision trace identifying apparatus, including:
an acquisition unit configured to acquire environmental information of a vehicle collision site and vehicle image information, the environmental information of the vehicle collision site including road surface tire mark image information and road surface residue image information, the vehicle image information including image information before a vehicle collision and image information after the vehicle collision;
the first processing unit is used for preprocessing the environmental information of the vehicle collision site, and sending the preprocessed environmental information of the vehicle collision site to the running direction determining module for determining the running direction of the vehicle to obtain the running direction information of the collision vehicle;
the second processing unit is used for sending the vehicle image information to the image recognition module for distinguishing image recognition, and sending the distinguishing image and the driving direction information of the collision vehicle to the vehicle driving speed analysis module for analysis to obtain the driving speed of the collision vehicle;
and the third processing unit is used for sending the driving direction information of the collision vehicle and the driving speed of the collision vehicle to the data storage module for storage and sending a first command, wherein the first command is a command for prompting staff to divide responsibility based on the data in the data storage module.
The beneficial effects of the invention are as follows:
according to the method, the environmental information of the vehicle collision site and the image information before and after the collision are analyzed. The environmental information is classified by a classification model so that the vehicle corresponding to each piece of environmental information can be determined, ensuring that each trace is accurately matched to its vehicle. The moving direction of each vehicle is then determined from its corresponding traces, and the driving speed of each vehicle is analyzed from its moving direction and the image difference before and after the collision, preparing for responsibility division. Manual operation is reduced, the objectivity of the data is improved, manpower and material resources are saved, and the threshold for trace identification is lowered, so that staff no longer need extensive working experience.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for identifying vehicle crash trace according to an embodiment of the invention;
fig. 2 is a schematic structural view of a vehicle collision trace identifying apparatus according to an embodiment of the present invention.
The marks in the figure: 701. an acquisition unit; 702. a first processing unit; 703. a second processing unit; 704. a third processing unit; 7021. a first processing subunit; 7022. a first computing subunit; 7023. a second computing subunit; 7024. a second processing subunit; 70241. a third processing subunit; 70242. a fourth processing subunit; 70243. a first analysis subunit; 70244. a second analysis subunit; 702441, a first acquisition subunit; 702442, fifth processing subunit; 702443, sixth processing subunit; 7031. a seventh processing subunit; 7032. an eighth processing subunit; 7033. a third analysis subunit; 7034. and a ninth processing subunit.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Example 1:
the present embodiment provides a vehicle collision trace identification method. Referring to fig. 1, the method includes steps S1, S2, S3 and S4.
S1, acquiring environment information and vehicle image information of a vehicle collision site, wherein the environment information of the vehicle collision site comprises road surface tire mark image information and road surface residue image information, and the vehicle image information comprises image information before a vehicle collision and image information after the vehicle collision;
it can be understood that the data of the vehicle collision site are acquired by data acquisition equipment and then uploaded to a database for storage. The method also requires uploading the pre-collision image of each vehicle and information such as the type of the collided vehicle, so that the movement track of each collided vehicle can be determined during trace identification, improving the accuracy of trace identification.
S2, preprocessing the environmental information of the vehicle collision site, and sending the preprocessed environmental information of the vehicle collision site to a running direction determining module for determining the running direction of the vehicle to obtain the running direction information of the collision vehicle;
it can be understood that this step classifies the environmental information of the vehicle collision site to determine the collision vehicle corresponding to each piece of environmental information, and thereby rapidly determines the driving direction of each collision vehicle. Since more than one vehicle may be involved in a collision, all environmental information must be matched to its corresponding vehicle. In this step, step S2 includes step S21, step S22, step S23 and step S24.
Step S21, transmitting all road surface tyre print image information of the vehicle collision site to an image recognition module for image recognition, wherein the road surface tyre print image information is classified to obtain at least two types of tyre print image information;
it can be understood that this step performs image recognition on all the road surface tyre print image information: each image is converted to grey levels, pixels are then connected according to their grey values to obtain a contour image of the tyre print in each grey-level image, and the contour images are classified to obtain at least two types of tyre print images.
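The grey-level conversion and contour grouping described above can be sketched in pure Python; the BT.601 grey weights, the threshold, and the tiny 2x2 frame are illustrative assumptions, not taken from the patent:

```python
# Minimal sketch of the step above (pure Python, hypothetical data).
def to_grey(rgb_image):
    """Luminance-weighted grey-level conversion (ITU-R BT.601 weights)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def contour_mask(grey, threshold):
    """Connect pixels by grey value: mark those at or above the threshold."""
    return [[1 if px >= threshold else 0 for px in row] for row in grey]

frame = [[(255, 255, 255), (0, 0, 0)],
         [(10, 10, 10), (200, 200, 200)]]
grey = to_grey(frame)           # -> [[255, 0], [10, 200]]
mask = contour_mask(grey, 128)  # -> [[1, 0], [0, 1]]
```

The resulting mask plays the role of the contour image that the classification is based on.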
Step S22, performing self-adaptive gradient update on the classification model by adopting SGD computing gradient, and sending the classified tire mark image information to the updated classification model for iterative processing, wherein the iterative processing step is to perform feature engineering processing based on the classified tire mark image, determine feature vectors required by iterative training, and perform iterative processing on the feature vectors by adopting an AUC algorithm to obtain an AUC value reaching the maximum iterative times;
it can be understood that this step updates and optimizes the classification model through the SGD gradient to obtain an updated classification model. The updated model is then iteratively optimized several times with the AUC algorithm to obtain the optimal classification model, which reclassifies the tire marks to obtain the most accurate classification.
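The SGD update of the classification model might look like the following minimal sketch. The logistic-regression form, learning rate, and sample data are assumptions; the patent does not specify the model:

```python
import math

# Illustrative sketch: one SGD step updating the model weights from a single
# labelled tire-print feature vector (assumed logistic-regression classifier).
def sgd_step(weights, features, label, lr=0.1):
    """Update weights with the gradient of the logistic loss for one sample."""
    z = sum(w * x for w, x in zip(weights, features))
    pred = 1.0 / (1.0 + math.exp(-z))             # sigmoid activation
    grad = [(pred - label) * x for x in features]
    return [w - lr * g for w, g in zip(weights, grad)]

w = sgd_step([0.0, 0.0], [1.0, 2.0], 1)  # weights move toward the positive class
```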
Step S23, calculating weight values of each tire print image information accounting for all road surface tire print image information, and comparing and classifying the classification weight value of each tire print image information with an AUC value reaching the maximum iteration number to obtain the category information of all tire prints;
it can be understood that this step updates the classification model using the gradient computed by SGD to optimize it, converts the classified tire print image information into feature vectors, and iterates over the feature vectors with the AUC algorithm to obtain the optimal AUC value. The optimal AUC value is used as a threshold and compared with the weighted classification value of each tire print, yielding the most accurate classification information. The classification therefore not only sorts the tire print images but also guarantees classification precision and the accuracy of subsequent calculations.
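The AUC-as-threshold comparison can be illustrated as follows; the pairwise-ranking form of AUC and the sample scores are assumptions made for the sketch:

```python
# Sketch of the step above: compute AUC by pairwise ranking, then use it as
# the threshold against each image's classification weight (assumed data).
def auc(scores, labels):
    """Probability that a random positive sample scores above a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

threshold = auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])  # separable -> 1.0
weights = [1.2, 0.4]                                  # per-image weight values
classes = ["A" if w >= threshold else "B" for w in weights]
```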
And step S24, transmitting the category information of all the tyre marks and the road surface residue image information to a running direction determining module for determining the running direction of the vehicle, and obtaining the running direction information of each collision vehicle.
It can be understood that the step determines the vehicle corresponding to each tire mark by analyzing the type of the tire mark and the road surface residue, and further determines the driving direction corresponding to each collision vehicle, so as to ensure the accuracy of the driving direction of each vehicle and increase the objectivity of the judgment, and in the step, step S24 includes step S241, step S242, step S243 and step S244.
Step S241, carrying out frame selection on all targets in the category information of all tire marks and the image information of the pavement residues, and carrying out key point identification on all the targets selected by the frame selection to obtain key points of the tire marks and the key points of the pavement residues in each frame selection target image;
it will be appreciated that this step determines key points of each tire mark and each road surface residue, such as a center point of the tire mark, a center point of the road surface residue, a contour point of the tire mark, a contour point of the road surface residue, and the like, by performing frame selection analysis on the category information of all the tire marks and the road surface residue image information.
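Deriving a centre key point from the framed contour points, as described above, can be sketched like this (the coordinates are hypothetical):

```python
# Sketch: the centre key point of a framed tire mark taken as the mean of
# its contour points (hypothetical pixel coordinates).
def centre_point(contour):
    """Centroid of a list of (x, y) contour points."""
    xs = [x for x, _ in contour]
    ys = [y for _, y in contour]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

centre = centre_point([(0, 0), (4, 0), (4, 2), (0, 2)])  # -> (2.0, 1.0)
```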
Step S242, performing track fitting on key points of the tire marks and key points of the road surface residues in each frame selected target image by adopting a Bezier curve to obtain a motion curve of the tire marks and a motion curve of the road surface residues;
it can be understood that in this step, all the key points are subjected to track fitting through a bezier curve to obtain a fitted curve, and then the running direction of the vehicle is determined based on the fitted curve.
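A minimal sketch of the Bezier track fitting, assuming a quadratic curve through three hypothetical key points; the tangent at t = 1 gives the heading at the last key point:

```python
# Sketch of the fitting step above (quadratic Bezier; points are illustrative).
def bezier_quad(p0, p1, p2, t):
    """Point on a quadratic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(u * u * a + 2 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

def end_tangent(p0, p1, p2):
    """Curve derivative at t=1, i.e. the direction of travel at the end."""
    return tuple(2 * (c - b) for b, c in zip(p1, p2))

pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
mid = bezier_quad(*pts, 0.5)   # -> (1.0, 0.5)
heading = end_tangent(*pts)    # -> (2.0, -2.0)
```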
Step S243, respectively carrying out association analysis on the motion curve of the tire mark and the motion curve of the road surface residue and preset model information of each collision vehicle to obtain the motion curve of the tire mark and the motion curve of the road surface residue corresponding to each collision vehicle;
the method can be used for determining the vehicle type and the corresponding vehicle information according to the motion curve of the contour points of the tire marks and the motion curve of the road surface residues, so that the accurate judgment of which vehicles collide is achieved, and further, the preparation is made for the subsequent running speed analysis and the running direction analysis.
Step S244, the motion curve of the tire mark and the motion curve of the road surface residue corresponding to each collision vehicle are sent to the trained direction recognition model for analysis, and the corresponding driving direction information of each collision vehicle is obtained.
It can be understood that this step judges, with the trained direction recognition model, the direction of the motion curve of the tire mark and of the motion curve of the road surface residue corresponding to each collision vehicle, which makes the judgment fast while reducing the subjectivity of manual judgment and improving efficiency. In this step, step S244 includes step S2441, step S2442 and step S2443.
Step S2441, acquiring a motion curve of a historical tire mark and a motion curve of a historical road surface residue, screening vehicle driving direction information corresponding to the motion curve of the historical tire mark and the motion curve of the historical road surface residue, and carrying out direction calibration on the vehicle driving direction information corresponding to the motion curve of the historical tire mark and the motion curve of the historical road surface residue to obtain calibrated direction information;
step S2442, processing the calibrated direction information based on a CART algorithm to obtain a CART decision tree, performing random pruning processing on the CART decision tree, and determining a constant of the CART decision tree to obtain at least one untrained sub decision tree;
step S2443, obtaining an optimal sub-decision tree based on the untrained sub-decision trees and the Gini index calculation method, and obtaining the direction recognition model based on the optimal sub-decision tree, wherein the direction recognition model comprises the optimal sub-decision tree and the target constant corresponding to the optimal sub-decision tree.
It can be understood that the direction corresponding to each motion curve is determined by labeling the historical data; the decision tree is then trained so that the direction of each new motion curve can be judged rapidly, improving judgment efficiency.
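The Gini-index comparison used to select the best sub-decision tree can be sketched as follows; the labels and split values are illustrative, and a real CART implementation would search all candidate thresholds:

```python
# Sketch of the Gini-index selection from steps S2442/S2443 (assumed data).
def gini(labels):
    """Gini impurity of a set of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_gini(values, labels, threshold):
    """Weighted Gini impurity of a binary split at the given threshold."""
    left = [y for x, y in zip(values, labels) if x < threshold]
    right = [y for x, y in zip(values, labels) if x >= threshold]
    n = len(labels)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

score = split_gini([1, 2, 8, 9], ["L", "L", "R", "R"], 5)  # perfect split -> 0.0
```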
Step S3, the vehicle image information is sent to an image recognition module for distinguishing image recognition, and the distinguishing image and the traveling direction information of the collision vehicle are sent to a vehicle traveling speed analysis module for analysis, so that the traveling speed of the collision vehicle is obtained;
it can be understood that this step recognizes the vehicle image information and judges the speed of the vehicle collision from the collision-induced change in the vehicle. In this step, step S3 includes step S31, step S32, step S33 and step S34.
Step S31, performing image recognition on the image information before the vehicle collision and the image information after the vehicle collision based on a YOLOV3 algorithm to obtain a distinguishing image before the vehicle collision and after the vehicle collision;
it can be understood that image recognition is performed on the image information before and after the vehicle collision by the YOLOV3 algorithm, so that the difference image before and after the collision is quickly extracted.
It can be understood that the YOLOV3 algorithm in this step performs image recognition by learning each difference image in the historical data and then predicts the difference image of each component with a convolutional neural network. Frame prediction uses three feature layers, of sizes 52, 26 and 13. Prediction at each scale is performed by (4+1+c)·k convolution kernels of size 1×1, where k is the number of preset bounding boxes (3 by default) and c is the number of classes of the predicted target: 4k parameters predict the offsets of the bounding boxes, k parameters predict the probability that a bounding box contains a target, and ck parameters predict the probabilities that the k preset bounding boxes correspond to the c target classes. After the bounding boxes are predicted, the loss function is calculated and each difference image is predicted by the convolutional neural network, determining the regional position occupied by each difference image.
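The head arithmetic quoted above can be checked with a short calculation; the class count c = 2 is an assumed example, not a value from the patent:

```python
# Worked check: each scale predicts with (4 + 1 + c) * k channels of 1x1
# convolutions, with k = 3 preset bounding boxes by default.
def head_channels(num_classes, k=3):
    """4k box offsets + k objectness scores + c*k class probabilities."""
    return (4 + 1 + num_classes) * k

scales = [52, 26, 13]               # the three feature-layer sizes
channels = head_channels(2)         # -> 21 output channels per scale
cells = sum(s * s for s in scales)  # -> 3549 grid cells across the scales
```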
Step S32, a three-dimensional rectangular coordinate system is established based on the image information before the vehicle collision, and the coordinates of the difference image in that coordinate system are determined, so as to obtain the size information of the difference image;
step S33, carrying out association analysis on the preset running direction information of the historical collision vehicle and the preset size information of the historical difference image respectively with the running speeds of the historical collision vehicles to obtain the corresponding relation between the running speed of each historical collision vehicle and the running direction information of the historical collision vehicle and the size information of the historical difference image respectively;
it can be understood that in this step, the size information of each difference image is determined by establishing a three-dimensional rectangular coordinate system, from which the change in vehicle position before and after the collision is measured. Association analysis is then performed on the historical data to determine the degree of correlation between the position change before and after the collision, the driving direction information of the historical collision vehicles, and their driving speeds, and the driving speed with the greatest correlation to the observed change is selected. For example, if the colliding vehicles were driving in opposite directions, the position change at the collision is 40 cm, and the most correlated driving speed is 100 km/h, this indicates a head-on collision: when the size of the difference image is 40 cm, the relative driving speed of the vehicles is 100 km/h.
Step S34, determining a traveling speed of the collision vehicle corresponding to the size information of the difference image and the traveling direction information of the collision vehicle based on the correspondence relation.
It can be understood that the size information of the difference image and the driving direction information of the collision vehicle are substituted into the correspondence relation of each driving speed, determining the driving speed corresponding to the current difference image size and the current driving direction, and thus preparing for the subsequent responsibility determination.
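The look-up of step S34 can be sketched as a nearest-size match against a hypothetical historical table; the entries echo the 40 cm / 100 km/h head-on example in the description, and the table itself is an illustrative assumption:

```python
# Sketch of the step-S34 correspondence look-up (hypothetical history table).
history = {
    ("head-on", 40): 100,
    ("head-on", 20): 60,
    ("rear-end", 40): 70,
}

def speed_for(direction, size_cm):
    """Return the historical speed whose stored size is nearest for this direction."""
    candidates = {s: v for (d, s), v in history.items() if d == direction}
    nearest = min(candidates, key=lambda s: abs(s - size_cm))
    return candidates[nearest]

speed = speed_for("head-on", 38)  # nearest stored size is 40 -> 100 km/h
```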
And S4, transmitting the driving direction information of the collision vehicle and the driving speed of the collision vehicle to a data storage module for storage, and transmitting a first command, wherein the first command is a command for prompting staff to divide responsibility based on the data in the data storage module.
Example 2:
as shown in fig. 2, the present embodiment provides a vehicle collision trace identifying apparatus including an acquisition unit 701, a first processing unit 702, a second processing unit 703, and a third processing unit 704.
An acquisition unit 701 for acquiring environmental information of a vehicle collision site including road surface tire mark image information and road surface residue image information and vehicle image information including image information before a vehicle collision and image information after a vehicle collision;
the first processing unit 702 is configured to pre-process the environmental information of the vehicle collision site, and send the pre-processed environmental information of the vehicle collision site to the driving direction determining module for determining the driving direction of the vehicle, so as to obtain driving direction information of the collision vehicle;
the second processing unit 703 is configured to send the vehicle image information to an image recognition module for distinguishing image recognition, and send the distinguishing image and the traveling direction information of the collision vehicle to a vehicle traveling speed analysis module for analysis, so as to obtain the traveling speed of the collision vehicle;
and the third processing unit 704 is configured to send the driving direction information of the collision vehicle and the driving speed of the collision vehicle to the data storage module for storage, and send a first command, where the first command is a command for prompting a worker to divide responsibility based on data in the data storage module.
The first processing unit 702 includes a first processing subunit 7021, a first computing subunit 7022, a second computing subunit 7023, and a second processing subunit 7024.
A first processing subunit 7021, configured to send all pieces of road surface tire print image information of the vehicle collision site to an image recognition module for image recognition, where all pieces of road surface tire print image information are classified to obtain at least two types of tire print image information;
the first calculating subunit 7022 is configured to perform adaptive gradient updating of the classification model using gradients computed by stochastic gradient descent (SGD), and to send the classified tire print image information to the updated classification model for iterative processing, where the iterative processing comprises performing feature engineering on the classified tire print images, determining the feature vectors required for iterative training, and iterating over the feature vectors using the AUC metric to obtain the AUC value at the maximum number of iterations;
the second calculating subunit 7023 is configured to calculate the weight value that each piece of tire print image information accounts for within all of the road surface tire print image information, and to compare the classification weight value of each piece of tire print image information against the AUC value at the maximum number of iterations, so as to obtain the category information of all of the tire marks;
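The classification-and-scoring flow described for subunits 7022 and 7023 can be sketched, purely as an illustration (the patent discloses no concrete hyperparameters, features, or data), with per-sample SGD updates of a logistic classifier and a rank-based AUC score; the two-feature toy data below is a hypothetical stand-in for extracted tread features:

```python
import numpy as np

def sgd_train(X, y, lr=0.1, epochs=50, seed=0):
    """Train a logistic classifier with plain per-sample SGD updates."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            p = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))  # sigmoid
            g = p - y[i]                                # log-loss gradient factor
            w -= lr * g * X[i]
            b -= lr * g
    return w, b

def auc_score(y_true, scores):
    """AUC as the probability that a positive sample outranks a negative one."""
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical "tread feature" data: two clusters standing in for two tire-print classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w, b = sgd_train(X, y)
auc = auc_score(y, X @ w + b)
```

The AUC computed this way would then serve as the threshold that subunit 7023 compares the per-image classification weights against.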
the second processing subunit 7024 is configured to send the category information and the road surface residue image information of all the tire marks to the driving direction determining module for determining the driving direction of the vehicle, so as to obtain driving direction information of each collision vehicle.
The second processing subunit 7024 includes a third processing subunit 70241, a fourth processing subunit 70242, a first analysis subunit 70243, and a second analysis subunit 70244.
The third processing subunit 70241 is configured to perform frame selection on all targets in the category information of all the tire marks and the road surface residue image information, and to perform key point recognition on all of the frame-selected targets, so as to obtain the key points of the tire marks and the key points of the road surface residue in each frame-selected target image;
a fourth processing subunit 70242, configured to perform trajectory fitting on the key points of the tire marks and the key points of the road surface residue in each frame-selected target image using a Bézier curve, so as to obtain a motion curve of the tire marks and a motion curve of the road surface residue;
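One plausible way to realise the Bézier trajectory fitting of subunit 70242 is a least-squares fit of a cubic Bézier's four control points to the ordered key points; the chord-length parameterisation used below is an assumption, since the patent does not specify one:

```python
import numpy as np

def fit_cubic_bezier(points):
    """Least-squares fit of a cubic Bezier's four control points to ordered keypoints."""
    pts = np.asarray(points, dtype=float)
    # chord-length parameterisation of the keypoints onto [0, 1]
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]
    # Bernstein basis matrix of the cubic Bezier at those parameters
    B = np.stack([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                  3 * t ** 2 * (1 - t), t ** 3], axis=1)
    ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)
    return ctrl

def eval_cubic_bezier(ctrl, t):
    """Evaluate the fitted curve at parameters t."""
    t = np.asarray(t, dtype=float)[:, None]
    B = np.concatenate([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                        3 * t ** 2 * (1 - t), t ** 3], axis=1)
    return B @ ctrl

# Keypoints on a straight skid line are reproduced exactly by the fit.
pts = np.column_stack([np.linspace(0, 1, 8), 2 * np.linspace(0, 1, 8)])
ctrl = fit_cubic_bezier(pts)
curve = eval_cubic_bezier(ctrl, np.linspace(0, 1, 8))
```

A higher- or lower-degree Bézier would work the same way; only the Bernstein basis matrix changes.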
the first analysis subunit 70243 is configured to perform association analysis on the motion curve of the tire mark and the motion curve of the road surface residue respectively with preset model information of each collision vehicle, so as to obtain a motion curve of the tire mark and a motion curve of the road surface residue corresponding to each collision vehicle;
and the second analysis subunit 70244 is configured to send the motion curve of the tire mark and the motion curve of the road surface residue corresponding to each collision vehicle to the trained direction recognition model for analysis, so as to obtain corresponding driving direction information of each collision vehicle.
The second analyzing subunit 70244 includes a first acquiring subunit 702441, a fifth processing subunit 702442, and a sixth processing subunit 702443.
The first obtaining subunit 702441 is configured to obtain a motion curve of a historical tire mark and a motion curve of a historical road surface residue, screen vehicle driving direction information corresponding to the motion curve of the historical tire mark and the motion curve of the historical road surface residue, and perform direction calibration on the vehicle driving direction information corresponding to the motion curve of the historical tire mark and the motion curve of the historical road surface residue to obtain calibrated direction information;
a fifth processing subunit 702442, configured to process the calibrated direction information based on a CART algorithm to obtain a CART decision tree, perform random pruning processing on the CART decision tree, and determine a constant of the CART decision tree to obtain at least one untrained sub decision tree;
a sixth processing subunit 702443, configured to obtain an optimal sub-decision tree based on the untrained sub-decision trees and a Gini index calculation, and to obtain the direction recognition model based on the optimal sub-decision tree, where the direction recognition model includes the optimal sub-decision tree and a target constant corresponding to the optimal sub-decision tree.
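As background for subunits 702442 and 702443, the Gini index that CART uses to rank candidate splits can be sketched as below; full tree construction, the pruning step, and the leaf constants are omitted, so this is only the split-selection core, not the patent's implementation:

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array: 1 - sum over classes of p_k^2."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

def best_split(x, y):
    """CART-style scan for the threshold minimising weighted Gini impurity."""
    best_thr, best_score = None, np.inf
    for thr in np.unique(x)[:-1]:          # candidate thresholds between samples
        left, right = y[x <= thr], y[x > thr]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_thr, best_score = thr, score
    return best_thr, best_score

# Toy one-feature data: a perfect split exists at x <= 3.
x = np.array([1, 2, 3, 10, 11, 12])
y = np.array([0, 0, 0, 1, 1, 1])
thr, score = best_split(x, y)
```

Repeating this scan recursively on each side of the chosen threshold, then pruning, yields the sub-decision trees from which the optimal one is selected.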
The second processing unit 703 includes a seventh processing subunit 7031, an eighth processing subunit 7032, a third analysis subunit 7033, and a ninth processing subunit 7034.
A seventh processing subunit 7031, configured to perform image recognition on the image information before the vehicle collision and the image information after the vehicle collision based on a YOLOV3 algorithm, so as to obtain a difference image between the image before the vehicle collision and the image after the vehicle collision;
an eighth processing subunit 7032, configured to establish a three-dimensional rectangular spatial coordinate system based on the image information before the vehicle collision, and to determine the coordinates of the difference image in that coordinate system, thereby obtaining the size information of the difference image;
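The difference-image recognition in the two steps above can be sketched as a box-level diff over detector outputs: boxes found after the collision with no close pre-collision match are kept as the difference. The before/after box lists below are hypothetical stand-ins for YOLOv3 detections, and the IoU threshold of 0.5 is an assumed value:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def difference_boxes(before, after, thr=0.5):
    """Boxes detected post-collision with no close pre-collision match (IoU >= thr).
    `before`/`after` are detector outputs, e.g. from a YOLOv3 model."""
    return [b for b in after if all(iou(b, a) < thr for a in before)]

# Hypothetical detections: the intact panel matches; the deformation region is new.
before = [[0.0, 0.0, 10.0, 10.0]]
after = [[0.1, 0.0, 10.0, 10.0], [20.0, 20.0, 30.0, 30.0]]
diff = difference_boxes(before, after)
```

Each surviving box would then be mapped into the scene's rectangular coordinate system to measure the difference region's size.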
a third analysis subunit 7033, configured to perform association analysis on the preset running direction information of the historical collision vehicle and the preset size information of the historical difference image respectively with the running speeds of the historical collision vehicles, so as to obtain a corresponding relationship between the running speed of each historical collision vehicle and the running direction information of the historical collision vehicle and the size information of the historical difference image respectively;
a ninth processing subunit 7034 is configured to determine, based on the correspondence relationship, a travel speed of the collision vehicle corresponding to the size information of the difference image and the travel direction information of the collision vehicle.
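A minimal sketch of the last two subunits, under stated assumptions: the difference region's size is taken as its per-axis extent in the scene coordinate system, and the stored correspondence relationship is approximated by a nearest-neighbour lookup over hypothetical historical records (the field names `direction_deg`, `size`, and `speed_kmh` are illustrative, not from the patent):

```python
import numpy as np

def difference_region_size(corners):
    """Per-axis extent of the difference region in the scene's rectangular
    coordinate system, from the region's corner coordinates."""
    c = np.asarray(corners, dtype=float)
    return c.max(axis=0) - c.min(axis=0)

def lookup_speed(direction_deg, size, history):
    """Return the speed of the closest historical record; a crude stand-in
    for the patent's stored correspondence relationship."""
    def dist(rec):
        return (abs(rec["direction_deg"] - direction_deg)
                + float(np.linalg.norm(np.asarray(rec["size"]) - np.asarray(size))))
    return min(history, key=dist)["speed_kmh"]

# Hypothetical difference-region corners and historical records.
size = difference_region_size([[0, 0, 0], [2, 1, 0.5], [2, 0, 0], [0, 1, 0.5]])
history = [
    {"direction_deg": 0, "size": [2, 1, 0.5], "speed_kmh": 40},
    {"direction_deg": 90, "size": [0.5, 0.5, 0.2], "speed_kmh": 70},
]
speed = lookup_speed(5, size, history)
```

A fitted regression over the historical records would be an equally plausible reading of the "correspondence relationship"; the nearest-neighbour form is just the simplest to show.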
It should be noted that, for the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and is therefore not repeated here.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its protection scope.
The foregoing is merely illustrative of the present invention, which is not limited thereto; any variation or substitution that a person skilled in the art could readily conceive likewise falls within its scope. The protection scope of the invention is therefore defined by the claims.

Claims (8)

1. A method for identifying a collision trace of a vehicle, comprising:
acquiring environment information and vehicle image information of a vehicle collision site, wherein the environment information of the vehicle collision site comprises road surface tyre print image information and road surface residue image information, and the vehicle image information comprises image information before a vehicle collision and image information after the vehicle collision;
preprocessing the environmental information of the vehicle collision site, and sending the preprocessed environmental information of the vehicle collision site to a running direction determining module for determining the running direction of the vehicle to obtain the running direction information of the collision vehicle;
transmitting the vehicle image information to an image recognition module for difference image recognition, and transmitting the difference image and the traveling direction information of the collision vehicle to a vehicle traveling speed analysis module for analysis, so as to obtain the traveling speed of the collision vehicle;
transmitting the traveling direction information of the collision vehicle and the traveling speed of the collision vehicle to a data storage module for storage, and transmitting a first command, wherein the first command is a command for prompting staff to divide responsibility based on data in the data storage module;
wherein transmitting the vehicle image information to the image recognition module for difference image recognition, and transmitting the difference image and the traveling direction information of the collision vehicle to the vehicle traveling speed analysis module for analysis to obtain the traveling speed of the collision vehicle, comprises:
performing image recognition on the image information before the vehicle collision and the image information after the vehicle collision based on a YOLOv3 algorithm, so as to obtain a difference image between the image before the vehicle collision and the image after the vehicle collision;
establishing a three-dimensional rectangular spatial coordinate system based on the image information before the vehicle collision, and determining the coordinates of the difference image in the three-dimensional rectangular coordinate system, so as to obtain the size information of the difference image;
carrying out association analysis on the preset running direction information of the historical collision vehicle and the preset size information of the historical difference image respectively with the running speeds of the historical collision vehicles to obtain the corresponding relation between the running speeds of the historical collision vehicles and the running direction information of the historical collision vehicles and the size information of the historical difference image respectively;
and determining the driving speed of the collision vehicle corresponding to the size information of the difference image and the driving direction information of the collision vehicle based on the corresponding relation.
2. The method for identifying the collision trace of the vehicle according to claim 1, wherein preprocessing the environmental information of the collision site of the vehicle, and transmitting the preprocessed environmental information of the collision site of the vehicle to a driving direction determining module for determining the driving direction of the vehicle, and obtaining the driving direction information of the collision vehicle, comprises:
transmitting all of the road surface tire print image information of the vehicle collision site to an image recognition module for image recognition, wherein the road surface tire print image information is classified to obtain at least two types of tire print image information;
performing adaptive gradient updating of the classification model using gradients computed by stochastic gradient descent (SGD), and sending the classified tire print image information to the updated classification model for iterative processing, wherein the iterative processing comprises performing feature engineering on the classified tire print images, determining the feature vectors required for iterative training, and iterating over the feature vectors using the AUC metric to obtain the AUC value at the maximum number of iterations;
calculating the weight value that each piece of tire print image information accounts for within all of the road surface tire print image information, and comparing the classification weight value of each piece of tire print image information with the AUC value at the maximum number of iterations, the AUC value at the maximum number of iterations being taken as a threshold against which the weight-calculated classified first information is compared, so as to obtain the category information of all of the tire prints;
and transmitting the category information of all the tire marks and the road surface residue image information to a driving direction determining module for determining the driving direction of the vehicle, and obtaining the driving direction information of each collision vehicle.
3. The method for identifying the collision trace of the vehicle according to claim 2, wherein the step of transmitting the category information of all the tire marks and the road surface residue image information to the traveling direction determining module to determine the traveling direction of the vehicle, and obtaining the traveling direction information of each collision vehicle, comprises:
performing frame selection on all targets in the category information of all tire marks and the image information of the road surface residues, and performing key point identification on all the selected targets to obtain key points of the tire marks and the key points of the road surface residues in each frame selection target image;
respectively carrying out track fitting on key points of the tire marks and key points of the road surface residues in each frame selected target image by adopting a Bezier curve to obtain a motion curve of the tire marks and a motion curve of the road surface residues;
respectively carrying out association analysis on the motion curve of the tire mark and the motion curve of the road surface residue and preset model information of each collision vehicle to obtain the motion curve of the tire mark and the motion curve of the road surface residue corresponding to each collision vehicle;
and transmitting the motion curve of the tire mark and the motion curve of the road surface residue corresponding to each collision vehicle to the trained direction recognition model for analysis, and obtaining the corresponding driving direction information of each collision vehicle.
4. The method for identifying the collision trace of the vehicle according to claim 3, wherein the method for constructing the trained direction recognition model comprises the following steps:
acquiring a motion curve of a historical tire mark and a motion curve of historical road surface residues, screening vehicle driving direction information corresponding to the motion curve of the historical tire mark and the motion curve of the historical road surface residues, and carrying out direction calibration on the vehicle driving direction information corresponding to the motion curve of the historical tire mark and the motion curve of the historical road surface residues to obtain calibrated direction information;
processing the calibrated direction information based on a CART algorithm to obtain a CART decision tree, performing random pruning processing on the CART decision tree, and determining a constant of the CART decision tree to obtain at least one untrained sub decision tree;
obtaining an optimal sub-decision tree based on the untrained sub-decision trees and a Gini index calculation, and obtaining the direction recognition model based on the optimal sub-decision tree, wherein the direction recognition model comprises the optimal sub-decision tree and a target constant corresponding to the optimal sub-decision tree.
5. A vehicle collision trace identification apparatus, comprising:
an acquisition unit configured to acquire environmental information of a vehicle collision site and vehicle image information, the environmental information of the vehicle collision site including road surface tire mark image information and road surface residue image information, the vehicle image information including image information before a vehicle collision and image information after the vehicle collision;
the first processing unit is used for preprocessing the environmental information of the vehicle collision site, and sending the preprocessed environmental information of the vehicle collision site to the running direction determining module for determining the running direction of the vehicle to obtain the running direction information of the collision vehicle;
the second processing unit is used for sending the vehicle image information to the image recognition module for difference image recognition, and sending the difference image and the driving direction information of the collision vehicle to the vehicle driving speed analysis module for analysis, so as to obtain the driving speed of the collision vehicle;
the third processing unit is used for sending the driving direction information of the collision vehicle and the driving speed of the collision vehicle to the data storage module for storage and sending a first command, wherein the first command is a command for prompting staff to divide responsibility based on the data in the data storage module;
wherein the second processing unit includes:
a seventh processing subunit, configured to perform image recognition on the image information before the vehicle collision and the image information after the vehicle collision based on a YOLOv3 algorithm, so as to obtain a difference image between the image before the vehicle collision and the image after the vehicle collision;
an eighth processing subunit, configured to establish a three-dimensional rectangular spatial coordinate system based on the image information before the vehicle collision, and to determine the coordinates of the difference image in that coordinate system, thereby obtaining the size information of the difference image;
a third analysis subunit, configured to perform association analysis on the preset running direction information of the historical collision vehicle and the preset size information of the historical difference image respectively with the running speeds of the historical collision vehicles, so as to obtain a corresponding relationship between the running speed of each historical collision vehicle and the running direction information of the historical collision vehicle and the size information of the historical difference image respectively;
and a ninth processing subunit configured to determine, based on the correspondence, a travel speed of the collision vehicle corresponding to the size information of the difference image and the travel direction information of the collision vehicle.
6. The vehicle collision trace identification apparatus as set forth in claim 5, wherein the first processing unit includes:
the first processing subunit is used for sending all of the road surface tire print image information of the vehicle collision site to the image recognition module for image recognition, wherein the road surface tire print image information is classified to obtain at least two types of tire print image information;
the first calculating subunit is used for performing adaptive gradient updating of the classification model using gradients computed by stochastic gradient descent (SGD), and for sending the classified tire print image information to the updated classification model for iterative processing, wherein the iterative processing comprises performing feature engineering on the classified tire print images, determining the feature vectors required for iterative training, and iterating over the feature vectors using the AUC metric to obtain the AUC value at the maximum number of iterations;
the second calculating subunit is used for calculating the weight value that each piece of tire print image information accounts for within all of the road surface tire print image information, and for comparing the classification weight value of each piece of tire print image information with the AUC value at the maximum number of iterations, the AUC value at the maximum number of iterations being taken as a threshold against which the weight-calculated classified first information is compared, so as to obtain the category information of all of the tire prints;
and the second processing subunit is used for sending the category information of all of the tire marks and the road surface residue image information to the driving direction determining module to determine the driving direction of the vehicle, so as to obtain the driving direction information of each collision vehicle.
7. The vehicle collision trace identification apparatus as set forth in claim 6, wherein the second processing subunit includes:
the third processing subunit is used for carrying out frame selection on all targets in the category information of all tire marks and the image information of the pavement residues, and carrying out key point identification on the targets selected by all frames to obtain key points of the tire marks and the key points of the pavement residues in each frame-selected target image;
a fourth processing subunit, configured to perform trajectory fitting on the key points of the tire marks and the key points of the road surface residue in each frame-selected target image using a Bézier curve, so as to obtain a motion curve of the tire marks and a motion curve of the road surface residue;
the first analysis subunit is used for carrying out association analysis on the motion curve of the tire mark and the motion curve of the road surface residue and preset model information of each collision vehicle respectively to obtain the motion curve of the tire mark and the motion curve of the road surface residue corresponding to each collision vehicle;
and the second analysis subunit is used for sending the motion curve of the tire mark and the motion curve of the pavement residue corresponding to each collision vehicle to the trained direction recognition model for analysis, so as to obtain the corresponding driving direction information of each collision vehicle.
8. The vehicle collision trace identification apparatus as set forth in claim 7, wherein the second analysis subunit includes:
the first acquisition subunit is used for acquiring a motion curve of the historical tire mark and a motion curve of the historical road surface residue, screening vehicle driving direction information corresponding to the motion curve of the historical tire mark and the motion curve of the historical road surface residue, and carrying out direction calibration on the vehicle driving direction information corresponding to the motion curve of the historical tire mark and the motion curve of the historical road surface residue to obtain calibrated direction information;
a fifth processing subunit, configured to process the calibrated direction information based on a CART algorithm to obtain a CART decision tree, perform random pruning processing on the CART decision tree, and determine a constant of the CART decision tree to obtain at least one untrained sub decision tree;
and the sixth processing subunit is used for obtaining an optimal sub-decision tree based on the untrained sub-decision trees and a Gini index calculation, and obtaining the direction recognition model based on the optimal sub-decision tree, wherein the direction recognition model comprises the optimal sub-decision tree and a target constant corresponding to the optimal sub-decision tree.
CN202310433116.3A 2023-04-21 2023-04-21 Method and device for identifying collision trace of vehicle Active CN116189114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310433116.3A CN116189114B (en) 2023-04-21 2023-04-21 Method and device for identifying collision trace of vehicle

Publications (2)

Publication Number Publication Date
CN116189114A CN116189114A (en) 2023-05-30
CN116189114B true CN116189114B (en) 2023-07-14

Family

ID=86449196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310433116.3A Active CN116189114B (en) 2023-04-21 2023-04-21 Method and device for identifying collision trace of vehicle

Country Status (1)

Country Link
CN (1) CN116189114B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102034258A (en) * 2011-01-10 2011-04-27 长安大学 Vehicle chain collision accident analytical computation and simulation reproduction system
CN115617217A (en) * 2022-11-23 2023-01-17 中国科学院心理研究所 Vehicle state display method, device, equipment and readable storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008305254A (en) * 2007-06-08 2008-12-18 Komutekku:Kk Drive recorder
CN102044090B (en) * 2010-12-30 2013-09-11 长安大学 Vehicle pileup accident analysis and simulation reconstruction computer system
US10163164B1 (en) * 2014-09-22 2018-12-25 State Farm Mutual Automobile Insurance Company Unmanned aerial vehicle (UAV) data collection and claim pre-generation for insured approval
CN104392328B (en) * 2014-12-03 2017-08-22 湖南大学 A kind of uncertainty assessment method of traffic traffic accident
KR102576697B1 (en) * 2016-04-01 2023-09-12 주식회사 에이치엘클레무브 Collision preventing apparatus and collision preventing method
CN111091591B (en) * 2019-12-23 2023-09-26 阿波罗智联(北京)科技有限公司 Collision detection method and device, electronic equipment and storage medium
CN111862607A (en) * 2020-07-22 2020-10-30 中国第一汽车股份有限公司 Responsibility division method, device, equipment and storage medium
CN112749210B (en) * 2021-01-18 2024-03-12 优必爱信息技术(北京)有限公司 Vehicle collision recognition method and system based on deep learning
CN112966352B (en) * 2021-03-10 2022-08-19 陕西蓝德智慧交通科技有限公司 System and method for rapidly calculating vehicle deformation collision energy in traffic accident
CN113538193B (en) * 2021-06-30 2024-07-16 南京云略软件科技有限公司 Traffic accident handling method and system based on artificial intelligence and computer vision
CN115797900B (en) * 2021-09-09 2023-06-27 廊坊和易生活网络科技股份有限公司 Vehicle-road gesture sensing method based on monocular vision
CN115774444B (en) * 2021-09-09 2023-07-25 廊坊和易生活网络科技股份有限公司 Path planning optimization method based on sparse navigation map

Similar Documents

Publication Publication Date Title
CN112816954B (en) Road side perception system evaluation method and system based on true value
CN106127747B (en) Car surface damage classifying method and device based on deep learning
CN109919072B (en) Fine vehicle type recognition and flow statistics method based on deep learning and trajectory tracking
CN105892471A (en) Automatic automobile driving method and device
Saisree et al. Pothole detection using deep learning classification method
CN116151506B (en) Weather-based method and device for determining real-time operation route of unmanned vehicle
CN108960055B (en) Lane line detection method based on local line segment mode characteristics
WO2021013190A1 (en) Meteorological parameter-based high-speed train positioning method and system in navigation blind zone
CN116359218B (en) Industrial aggregation area atmospheric pollution mobile monitoring system
CN116168356B (en) Vehicle damage judging method based on computer vision
CN115398062A (en) Method for generating learned model and road surface feature determination device
CN110793501A (en) Subway tunnel clearance detection method
CN110567383A (en) pantograph abrasion early warning system and detection method based on structural forest and sub-pixels
CN113841152A (en) Method, data processing device and computer program product for determining a road intersection
CN114495421B (en) Intelligent open type road construction operation monitoring and early warning method and system
CN110210326B (en) Online train identification and speed estimation method based on optical fiber vibration signals
CN116189114B (en) Method and device for identifying collision trace of vehicle
CN105335758A (en) Model identification method based on video Fisher vector descriptors
CN113392695B (en) Highway truck and wheel axle identification method thereof
CN115205549A (en) SLAM method based on mutual information and semantic segmentation
Chavan et al. An Overview of Machine Learning Techniques for Evaluation of Pavement Condition
CN118182572B (en) Anti-collision early warning device for railway mobile equipment
CN117115759B (en) Road side traffic target detection system and method based on category guidance
CN116630900B (en) Passenger station passenger streamline identification method, system and equipment based on machine learning
CN117894011A (en) Point cloud-based method for detecting foreign matters at bottom of rail transit vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant