CN116152637A - Evaluation method of automatic driving perception model, computer equipment and storage medium - Google Patents
Evaluation method of automatic driving perception model, computer equipment and storage medium
- Publication number
- CN116152637A (application number CN202310172817.6A)
- Authority
- CN
- China
- Prior art keywords
- data
- perception
- evaluated
- vehicle
- environment information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
- G06V10/993—Evaluation of the quality of the acquired pattern
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The application relates to an evaluation method, an evaluation apparatus, a computer device, a storage medium, and a computer program product for an automatic driving perception model. In the method, the target environment information of the vehicle corresponding to the data to be evaluated is determined for a target evaluation item; the predicted environment information of the vehicle, predicted after the perception model to be evaluated performs perception processing on the data to be evaluated, is obtained; the predicted environment information is then matched against the target environment information of the vehicle to obtain a matching result; and finally the evaluation result of the perception model to be evaluated is determined according to the matching result. Because the target environment information corresponding to the data to be evaluated can be determined separately for each target evaluation item, the same method is applicable to many different evaluation items and, compared with the conventional technology, saves evaluation development cost.
Description
Technical Field
The present application relates to the field of automatic driving technology, and in particular to a method, an apparatus, a computer device, a storage medium, and a computer program product for evaluating an automatic driving perception model.
Background
With the development of automatic driving technology, the perception model has become one of the core technologies of an automatic driving vehicle, so evaluating the perception model is very important.
In the conventional technology, when a perception model is evaluated against different evaluation items (such as algorithm evaluation, regression evaluation, version comparison evaluation, end-cloud consistency evaluation, and data self-consistency evaluation), each perception module must develop separate scripts or evaluation programs for the corresponding evaluation. Because none of these scripts or programs is reusable, the development cost invested in evaluation is high.
Disclosure of Invention
In view of the above, to address the problem of high development cost, it is necessary to provide an evaluation method, an evaluation apparatus, a computer device, a computer-readable storage medium, and a computer program product for an automatic driving perception model that can save evaluation development cost.
In a first aspect, the present application provides a method for evaluating an autopilot perception model. The method comprises the following steps:
acquiring data to be evaluated, wherein the data to be evaluated comprises sensing data collected while the vehicle is travelling;
determining, for a target evaluation item, target environment information of the vehicle corresponding to the data to be evaluated;
acquiring predicted environment information of the vehicle, predicted after the perception model to be evaluated performs perception processing on the data to be evaluated;
matching the predicted environment information of the vehicle against the target environment information to obtain a matching result;
and determining an evaluation result of the perception model to be evaluated according to the matching result.
In one embodiment, the acquiring the data to be evaluated includes: acquiring sensing data continuously acquired in the running process of a vehicle; dividing the perception data according to a set time interval to obtain a plurality of divided perception data segments; and taking the plurality of perception data segments as the data to be evaluated.
In one embodiment, the perceptual data segment comprises a plurality of frames; after the plurality of segmented perception data segments are obtained, the method further comprises: identifying a perception object of each frame in the perception data segment; and determining the time sequence track of the same perceived object under different frames according to the perceived object of each frame.
In one embodiment, the taking the plurality of perception data segments as the data to be evaluated includes: establishing a tree structure of the perception data according to a plurality of perception data segments of the perception data, a plurality of frames corresponding to each perception data segment, a perception object of each frame and time sequence tracks of the same perception object under different frames; and taking the perception data of the tree structure as the data to be evaluated.
In one embodiment, after the creating the tree structure of the perceptual data, the method further comprises: and storing the perception data of the tree structure.
In one embodiment, the obtaining the predicted environment information of the vehicle predicted after the perception model to be evaluated performs perception processing on the data to be evaluated includes: inputting each perception data segment in the data to be evaluated into the perception model to be evaluated, and obtaining, for each perception data segment, the predicted environment information of the vehicle output by the perception model to be evaluated.
In one embodiment, the matching processing of the predicted environment information and the target environment information of the vehicle to obtain a matching result includes: respectively matching the predicted environment information of the vehicle predicted for each perceived data segment with corresponding target environment information to obtain the matching degree of each perceived data segment; and counting the matching degree of each perception data segment in the data to be evaluated, and obtaining a matching result aiming at the data to be evaluated.
In a second aspect, the application further provides an evaluation device of the autopilot perception model. The device comprises:
the data acquisition module, configured to acquire data to be evaluated, wherein the data to be evaluated comprises sensing data collected while the vehicle is travelling;
the target information determining module, configured to determine, for a target evaluation item, target environment information of the vehicle corresponding to the data to be evaluated;
the prediction information acquisition module, configured to acquire predicted environment information of the vehicle predicted after the perception model to be evaluated performs perception processing on the data to be evaluated;
the matching module, configured to match the predicted environment information of the vehicle against the target environment information to obtain a matching result;
and the evaluation result determining module, configured to determine an evaluation result of the perception model to be evaluated according to the matching result.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the method according to the first aspect above when the processor executes the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method as described in the first aspect above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprising a computer program which, when executed by a processor, implements the steps of the method as described in the first aspect above.
According to the evaluation method, apparatus, computer device, storage medium, and computer program product for the automatic driving perception model, the data to be evaluated is obtained; the target environment information of the vehicle corresponding to the data to be evaluated is determined for the target evaluation item; the predicted environment information of the vehicle, predicted after the perception model to be evaluated performs perception processing on the data to be evaluated, is obtained; the predicted environment information of the vehicle is then matched against the target environment information to obtain a matching result; and finally the evaluation result of the perception model to be evaluated is determined according to the matching result. Because the target environment information corresponding to the data to be evaluated can be determined separately for each target evaluation item, the same method is applicable to many different evaluation items and, compared with the conventional technology, saves evaluation development cost.
Drawings
FIG. 1 is a flow chart of a method for evaluating an automatic driving perception model in one embodiment;
FIG. 2 is a flowchart illustrating the steps of obtaining perception data in one embodiment;
FIG. 3 is a flowchart illustrating the steps of preprocessing perception data in one embodiment;
FIG. 4 is a flowchart illustrating the steps of preprocessing perception data in another embodiment;
FIG. 5 is a schematic diagram of the tree structure of perception data in one embodiment;
FIG. 6 is a flowchart of the matching process steps in one embodiment;
FIG. 7 is a block diagram of an evaluation apparatus for an automatic driving perception model in one embodiment;
FIG. 8 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, an evaluation method of an autopilot perception model is provided, where the embodiment is illustrated by using the method applied to a computer device as an example, and specifically may include the following steps:
and 102, acquiring data to be evaluated.
The data to be evaluated comprises sensing data collected while the vehicle is travelling. Specifically, the sensing data includes, but is not limited to, sensor data and image data recorded during the vehicle's historical travel.
The perception model is a learned network model that identifies information such as the position, size, category, and speed of perception objects during the vehicle's automatic driving. In this embodiment, when the perception model needs to be evaluated, the computer device first acquires the data to be evaluated.
Step 104, determining, for the target evaluation item, the target environment information of the vehicle corresponding to the data to be evaluated.
The target evaluation item is the test item for which the perception model currently needs to be evaluated, including but not limited to algorithm evaluation, regression evaluation, version comparison evaluation, end-cloud consistency evaluation, and data self-consistency evaluation of the perception model. The target environment information is the truth data, corresponding to both the target evaluation item and the data to be evaluated, that is used to evaluate the perception model. For example, if the target evaluation item is an algorithm evaluation of the perception model, the target environment information may be label information such as the position, size, category, and speed of perception objects annotated on the data to be evaluated; if the target evaluation item is a version comparison evaluation of the perception model, the target environment information may be the position, size, category, speed, and similar information of perception objects in the vehicle's environment, output after a perception model of the version to be compared performs perception processing on the data to be evaluated. The target environment information of the vehicle corresponding to the data to be evaluated therefore differs across target evaluation items.
In this embodiment, the computer device may determine, for the target evaluation item, target environmental information of the vehicle corresponding to the data to be evaluated.
Step 106, obtaining the predicted environment information of the vehicle predicted after the perception model to be evaluated performs perception processing on the data to be evaluated.
The perception model to be evaluated is the perception model that currently needs evaluation. The predicted environment information is the information about the vehicle's environment that the perception model to be evaluated outputs after performing perception processing on the data to be evaluated, including but not limited to predictions of the position, size, category, and speed of the perception objects it recognizes in that environment. Specifically, the computer device may obtain the predicted environment information of the vehicle predicted after the perception model to be evaluated performs perception processing on the data to be evaluated.
Step 108, matching the predicted environment information of the vehicle against the target environment information to obtain a matching result.
The matching process may be a process of comparing each index in the predicted environment information and the target environment information of the vehicle, and the matching result may be a result of determining whether the indexes are matched after the comparison process. Specifically, each index may be a corresponding perception object and an index such as a position, a size, a category, and a speed of each perception object.
In this embodiment, the computer device performs matching processing on the predicted environment information and the target environment information of the vehicle, so as to obtain a corresponding matching result.
Step 110, determining the evaluation result of the perception model to be evaluated according to the matching result.
The evaluation result refers to a result which is output to represent the performance of the to-be-evaluated perception model after the to-be-evaluated perception model is evaluated, and includes, for example, but not limited to, accuracy, recall, error distribution and the like. Specifically, the computer device may determine an evaluation result of the to-be-evaluated perceptual model according to the obtained matching result.
In the above method for evaluating the automatic driving perception model, the data to be evaluated is obtained; the target environment information of the vehicle corresponding to the data to be evaluated is determined for the target evaluation item; the predicted environment information of the vehicle, predicted after the perception model to be evaluated performs perception processing on the data to be evaluated, is obtained; the predicted environment information of the vehicle is then matched against the target environment information to obtain a matching result; and finally the evaluation result of the perception model to be evaluated is determined according to the matching result. Because the target environment information corresponding to the data to be evaluated can be determined separately for each target evaluation item, the same method is applicable to many different evaluation items and, compared with the conventional technology, saves evaluation development cost.
In one embodiment, as shown in fig. 2, in step 102, obtaining the data to be evaluated may specifically include:
Step 202, acquiring the sensing data continuously collected while the vehicle is travelling.
The sensing data, such as sensor data and image data recorded during the vehicle's historical travel, is usually collected continuously and is therefore characterized by a large data volume.
Step 204, segmenting the sensing data according to a set time interval to obtain a plurality of segmented perception data segments.
The set time interval is a preset interval used for slicing the sensing data, for example 5 minutes or 3 minutes; that is, the sensing data is sliced every 5 or 3 minutes to obtain a plurality of perception data segments. Because the sensing data is collected continuously, its volume is large and processing it as a whole would take a long time. In this embodiment, the computer device segments the sensing data according to the set time interval to obtain a plurality of perception data segments, which can then be evaluated in parallel to improve evaluation efficiency.
Step 206, taking the plurality of perception data segments as the data to be evaluated.
Specifically, the computer device may use the segmented multiple pieces of perceptual data as the data to be evaluated, that is, the data to be evaluated includes multiple pieces of perceptual data.
In the above embodiment, the computer device acquires the sensing data continuously acquired during the running process of the vehicle, segments the sensing data according to the set time interval to obtain a plurality of segmented sensing data segments, and uses the plurality of sensing data segments as the data to be evaluated, so that parallel evaluation can be performed, and the evaluation efficiency can be improved.
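The segmentation described above can be sketched as follows; a minimal illustration, assuming the sensing data arrives as a time-ordered list of (timestamp, frame) pairs (the helper name and data layout are hypothetical, not taken from the patent):

```python
def split_into_clips(frames, interval_s=300.0):
    """Slice a time-ordered list of (timestamp_s, frame) pairs into
    perception data segments ("clips") of at most interval_s seconds,
    e.g. 5-minute segments that can later be evaluated in parallel."""
    clips, current, clip_start = [], [], None
    for ts, frame in frames:
        if clip_start is None:
            clip_start = ts
        if ts - clip_start >= interval_s:
            clips.append(current)  # close the finished segment
            current, clip_start = [], ts
        current.append((ts, frame))
    if current:
        clips.append(current)  # keep the trailing partial segment
    return clips

# 12 minutes of 1 Hz frames split at 5-minute intervals -> 3 segments
frames = [(float(t), {"frame_id": t}) for t in range(720)]
clips = split_into_clips(frames, interval_s=300.0)
```

Each resulting clip is independent of the others, which is what makes the parallel evaluation mentioned above possible.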
In one embodiment, each segment of perceptual data may comprise a plurality of frames. Then, as shown in fig. 3, after obtaining the segmented plurality of perceived data segments in step 204, the method may further include:
Step 302, identifying the perception object of each frame in the perception data segment.
The perception object may be all objects in a frame, for example, pedestrians, vehicles, obstacles, traffic lights, etc. In this embodiment, the computer device may identify a percept object for each frame in the segment of perceptual data.
Step 304, determining the time sequence track of the same perception object under different frames according to the perception object of each frame.
Because the perception data segments are obtained by slicing continuously collected sensing data at a fixed time interval, the frames within each perception data segment are temporally continuous. Since the perception object of each frame in the segment has already been identified in the preceding step, the time sequence track of the same perception object under different frames can be determined from this temporal continuity together with the perception object of each frame.
In this embodiment, the computer device identifies the perception object of each frame in the perception data segment, and determines the time sequence track of the same perception object under different frames according to the perception object of each frame, so as to perform evaluation, and thus, the evaluation efficiency can be improved.
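The track-building step above can be sketched as below; a hypothetical layout in which each frame is a list of percept dicts carrying a tracking id (none of these names come from the patent):

```python
def build_trajectories(clip):
    """Group the perception objects of consecutive frames by tracking id,
    yielding each object's time-ordered track within one clip.  A clip is
    a list of frames; a frame is a list of {"id", "pos"} percept dicts."""
    tracks = {}
    for frame_idx, frame in enumerate(clip):
        for obj in frame:
            # frames are temporally ordered, so appending preserves
            # the time sequence of each object's track
            tracks.setdefault(obj["id"], []).append((frame_idx, obj["pos"]))
    return tracks

clip = [
    [{"id": "car-1", "pos": (0.0, 0.0)}, {"id": "ped-1", "pos": (5.0, 1.0)}],
    [{"id": "car-1", "pos": (1.0, 0.0)}],
    [{"id": "car-1", "pos": (2.0, 0.0)}, {"id": "ped-1", "pos": (5.0, 2.0)}],
]
tracks = build_trajectories(clip)
```

Note that "ped-1" is absent from the middle frame, so its track simply skips that frame index; how gaps are handled is left open here.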
In one embodiment, as shown in fig. 4, in step 206, using the plurality of sensing data segments as the data to be evaluated may specifically include:
Step 402, establishing a tree structure of the perception data according to the plurality of perception data segments of the perception data, the plurality of frames corresponding to each perception data segment, the perception object of each frame, and the time sequence tracks of the same perception object under different frames.
A tree structure expresses a hierarchical relationship: a one-to-many relationship exists among its data elements, making it an important nonlinear data structure. In a tree, the root node has no predecessor, and every other node has exactly one predecessor; leaf nodes have no successors, while any other node may have one or more successors. In this embodiment, the tree structure built with the perception data as the root node is shown in fig. 5: the perception data Dataset is the root node, each perception data segment Clip is a child node of the Dataset, each Frame is a child node of its Clip, and each perception Object is a child node of its Frame and also a leaf of the tree. Specifically, the Dataset may include a plurality of Clips, each Clip corresponds to a plurality of Frames, and each Frame may contain a plurality of Objects, so the time sequence track Trajectory of the same Object across different Frames within a Clip can also be determined. This forms a general data structure, the tree structure of the perception data, which can be used across various evaluation items to evaluate the perception model.
Step 404, taking the perception data of the tree structure as the data to be evaluated.
Specifically, the computer device uses the perception data of the established tree structure as the data to be evaluated, so that the same data can be used across various evaluation items to evaluate the perception model.
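The Dataset, Clip, Frame, and Object hierarchy of fig. 5 might be modelled as plain dataclasses; this is a sketch only, with illustrative field choices that the patent does not specify:

```python
from dataclasses import dataclass, field

@dataclass
class PerceptionObject:        # leaf node: one percept within one frame
    track_id: str
    category: str

@dataclass
class Frame:                   # child of a Clip
    timestamp: float
    objects: list = field(default_factory=list)

@dataclass
class Clip:                    # child of the Dataset (one data segment)
    frames: list = field(default_factory=list)

    def trajectory(self, track_id):
        """Time-ordered frames in which track_id appears (the 'time
        sequence track' of one object within this clip)."""
        return [f for f in self.frames
                if any(o.track_id == track_id for o in f.objects)]

@dataclass
class Dataset:                 # tree root holding all clips
    clips: list = field(default_factory=list)

dataset = Dataset(clips=[Clip(frames=[
    Frame(0.0, [PerceptionObject("car-1", "vehicle")]),
    Frame(0.1, [PerceptionObject("car-1", "vehicle"),
                PerceptionObject("ped-1", "pedestrian")]),
])])
```

Because every node keeps only downward references, the structure serializes naturally, which fits the storage step described next.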
In one embodiment, after establishing the tree structure of the perception data in step 402, the method may further include: storing the perception data of the tree structure. Storing the tree-structured perception data allows it to be reused by other evaluation items to evaluate the perception model, which makes the data general-purpose and improves the evaluation efficiency of the perception model on those other evaluation items.
In one embodiment, in step 106, obtaining the predicted environment information of the vehicle predicted after the perception model to be evaluated performs perception processing on the data to be evaluated may specifically include: inputting each perception data segment in the data to be evaluated into the perception model to be evaluated, and obtaining the predicted environment information of the vehicle that the model predicts for that segment. By performing perception processing on each perception data segment through the perception model to be evaluated, the predicted environment information of the vehicle based on that segment is obtained.
In one scenario, the data to be evaluated has corresponding target environment information for each of the different evaluation items, and likewise each perception data segment within the data to be evaluated has its own corresponding target environment information, which facilitates the subsequent matching process.
In one embodiment, as shown in fig. 6, in step 108, the matching process is performed on the predicted environmental information and the target environmental information of the vehicle to obtain a matching result, and specifically the method may further include the following steps:
step 602, the predicted environment information of the vehicle predicted for each perceived data segment is respectively matched with the corresponding target environment information, so as to obtain the matching degree for each perceived data segment.
The matching degree may be a degree of matching between predicted environmental information of the vehicle predicted for each of the pieces of perceived data and corresponding target environmental information.
Specifically, suppose the predicted environment information of the vehicle predicted by the model to be evaluated for a certain perception data segment includes a predicted perception object A, with position AP, size AS, category AK, and speed AV. If the target evaluation item is an algorithm evaluation of the perception model to be evaluated, the target environment information corresponding to that segment is label information such as the position, size, category, and speed of the annotated perception object; in this embodiment, let the annotated perception object B have position BP, size BS, category BK, and speed BV. Matching the predicted environment information against the corresponding target environment information then means matching the predicted perception object A against the annotated perception object B: for example, computing the object similarity AB between A and B, the category similarity ABK between AK and BK, the position fit ABP between AP and BP, the size fit ABS between AS and BS, and the speed fit ABV between AV and BV, and weighting these indexes to obtain the matching degree for the perception data segment. For example, if the object similarity AB is weighted l, the category similarity ABK is weighted m, the position fit ABP is weighted n, the size fit ABS is weighted r, and the speed fit ABV is weighted o, then the matching degree of the perception data segment is F = AB×l + ABK×m + ABP×n + ABS×r + ABV×o, where l, m, n, r, and o sum to 1.
It will be appreciated that the matching manner may be different for different evaluation items, and this embodiment is not limited thereto.
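The weighted matching degree above (F = AB×l + ABK×m + ABP×n + ABS×r + ABV×o) can be computed directly; the sketch below uses made-up similarity values, and the per-index similarity functions themselves are outside its scope:

```python
def matching_degree(similarities, weights):
    """Weighted matching degree for one predicted/ground-truth percept
    pair: F = AB*l + ABK*m + ABP*n + ABS*r + ABV*o.  Both dicts map the
    same index names to values; the weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(similarities[k] * w for k, w in weights.items())

# AB, ABK, ABP, ABS, ABV for one percept pair (illustrative values)
sims = {"object": 0.9, "category": 1.0, "position": 0.8,
        "size": 0.7, "speed": 0.6}
# the weights l, m, n, r, o
weights = {"object": 0.3, "category": 0.2, "position": 0.2,
           "size": 0.15, "speed": 0.15}
F = matching_degree(sims, weights)
```

As the preceding paragraph notes, other evaluation items may match differently; only the weight names and the weighted-sum form come from the text.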
Step 604, counting the matching degrees of the perception data segments in the data to be evaluated to obtain the matching result for the data to be evaluated.
Specifically, distributed statistics are performed over the matching degrees of all perception data segments in the data to be evaluated to obtain the matching result for the data to be evaluated. The evaluation result of the perception model to be evaluated can then be determined from this matching result, realizing general-purpose evaluation of the perception model.
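Counting the per-segment matching degrees into one matching result might look like the sketch below; the text leaves the aggregation statistic open, so a simple mean is assumed here:

```python
def aggregate_matching_result(clip_degrees):
    """Combine the matching degree of each perception data segment into a
    single matching result for the whole data to be evaluated.  Using the
    plain mean is an assumption; the patent does not fix the statistic."""
    if not clip_degrees:
        raise ValueError("no perception data segments to aggregate")
    return sum(clip_degrees) / len(clip_degrees)

# matching degrees of three perception data segments (illustrative values)
result = aggregate_matching_result([0.825, 0.9, 0.75])
```

Because the segments were evaluated independently, the per-segment degrees can be computed in parallel and combined only in this final step.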
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in these flowcharts may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different times, and their order is not necessarily sequential: they may be performed in turn or in alternation with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the application also provides an evaluation apparatus for implementing the above evaluation method of the automatic driving perception model. The implementation scheme of this apparatus is similar to that described for the method, so for the specific limitations of the one or more embodiments of the evaluation apparatus provided below, reference may be made to the limitations of the evaluation method of the automatic driving perception model above, which are not repeated here.
In one embodiment, as shown in fig. 7, there is provided an evaluation device for an automatic driving perception model, including: a data acquisition module 702, a target information determination module 704, a prediction information acquisition module 706, a matching module 708, and an evaluation result determination module 710, wherein:
the data acquisition module 702 is configured to acquire data to be evaluated, where the data to be evaluated includes sensing data acquired during a vehicle driving process;
the target information determining module 704 is configured to determine, for a target evaluation item, target environmental information of a vehicle corresponding to the data to be evaluated;
the prediction information acquisition module 706 is configured to acquire predicted environment information of the vehicle, predicted after the perception model to be evaluated perceives the data to be evaluated;
a matching module 708, configured to perform matching processing on the predicted environment information and the target environment information of the vehicle, so as to obtain a matching result;
and the evaluation result determining module 710 is configured to determine an evaluation result of the perception model to be evaluated according to the matching result.
In one embodiment, the data acquisition module is further configured to: acquiring sensing data continuously acquired in the running process of a vehicle; dividing the perception data according to a set time interval to obtain a plurality of divided perception data segments; and taking the plurality of perception data segments as the data to be evaluated.
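The segmentation described above can be sketched as follows; the `Frame` structure, the 10 Hz frame rate, and the 2-second interval are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    timestamp: float  # seconds since the start of the drive
    payload: dict     # raw sensor readings for this frame (assumed shape)

def split_into_segments(frames: List[Frame], interval_s: float) -> List[List[Frame]]:
    """Divide continuously collected perception data into segments of a set
    time interval, yielding the perception data segments used as the data
    to be evaluated."""
    if not frames:
        return []
    segments: List[List[Frame]] = []
    current: List[Frame] = []
    segment_start = frames[0].timestamp
    for frame in frames:
        if frame.timestamp - segment_start >= interval_s:
            segments.append(current)
            current, segment_start = [], frame.timestamp
        current.append(frame)
    if current:
        segments.append(current)
    return segments

# e.g. 5 seconds of 10 Hz data split into 2-second segments
frames = [Frame(timestamp=i * 0.1, payload={}) for i in range(50)]
segments = split_into_segments(frames, interval_s=2.0)
```

Segmenting by wall-clock interval rather than frame count keeps segments comparable even when the sensor's frame rate varies during the drive.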
In one embodiment, the perception data segment comprises a plurality of frames; the data acquisition module is further configured to: identify a perception object in each frame of the perception data segment; and determine the time sequence track of the same perceived object under different frames according to the perceived object of each frame.
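A minimal sketch of this trajectory step: detections carrying the same object identifier are grouped across frames into a time sequence track. Associating objects purely by a pre-assigned `id`, and the detection dict layout, are assumptions; the embodiment does not specify the association method.

```python
from collections import defaultdict

def build_trajectories(segment):
    """Group detections of the same perceived object across the frames of a
    perception data segment into a time sequence track of
    (frame_index, position) pairs."""
    trajectories = defaultdict(list)
    for frame_idx, detections in enumerate(segment):
        for det in detections:  # det layout assumed: {"id": ..., "pos": (x, y)}
            trajectories[det["id"]].append((frame_idx, det["pos"]))
    return dict(trajectories)

# A segment of three frames with two perceived objects
segment = [
    [{"id": "car_1", "pos": (0.0, 0.0)}, {"id": "ped_1", "pos": (5.0, 1.0)}],
    [{"id": "car_1", "pos": (0.5, 0.0)}],
    [{"id": "car_1", "pos": (1.0, 0.0)}, {"id": "ped_1", "pos": (5.0, 1.5)}],
]
tracks = build_trajectories(segment)
```

Note that a track tolerates missed detections: `ped_1` is absent from the middle frame yet its trajectory still links frames 0 and 2.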
In one embodiment, the data acquisition module is further configured to: establishing a tree structure of the perception data according to a plurality of perception data segments of the perception data, a plurality of frames corresponding to each perception data segment, a perception object of each frame and time sequence tracks of the same perception object under different frames; and taking the perception data of the tree structure as the data to be evaluated.
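One plausible reading of this tree structure, sketched in Python with nested dicts; the node layout and field names are assumptions for illustration, not the patented format.

```python
from collections import defaultdict

def build_perception_tree(segments):
    """Build a tree over the perception data: root -> perception data
    segments -> frames -> perceived objects, with each object's time
    sequence track attached at the segment level."""
    root = {"type": "perception_data", "segments": []}
    for seg_idx, segment in enumerate(segments):
        trajectories = defaultdict(list)
        frame_nodes = []
        for frame_idx, detections in enumerate(segment):
            frame_nodes.append({
                "type": "frame",
                "index": frame_idx,
                "objects": [d["id"] for d in detections],
            })
            for d in detections:
                trajectories[d["id"]].append((frame_idx, d["pos"]))
        root["segments"].append({
            "type": "segment",
            "index": seg_idx,
            "frames": frame_nodes,
            "trajectories": dict(trajectories),
        })
    return root

# One segment of two frames tracking a single object
segments = [[
    [{"id": "car_1", "pos": (0, 0)}],
    [{"id": "car_1", "pos": (1, 0)}],
]]
tree = build_perception_tree(segments)
```

Storing the data in this hierarchical form lets an evaluation run address a whole segment, a single frame, or one object's track without re-scanning the raw stream.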
In one embodiment, the apparatus further comprises a storage module for storing the perception data of the tree structure.
In one embodiment, the prediction information acquisition module is further configured to: for each perception data segment in the data to be evaluated, input the perception data segment into the perception model to be evaluated, and obtain the predicted environment information of the vehicle predicted by the perception model to be evaluated for that segment.
In one embodiment, the matching module is further configured to: match the predicted environment information of the vehicle predicted for each perception data segment with the corresponding target environment information to obtain the matching degree of each perception data segment; and perform statistics on the matching degrees of the perception data segments in the data to be evaluated to obtain a matching result for the data to be evaluated.
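The per-segment matching and the final evaluation can be sketched as below. The set-overlap score, the object labels, and the 0.8 pass threshold are all illustrative assumptions; as noted earlier, the matching manner may differ per evaluation item.

```python
def segment_matching_degree(predicted, target):
    """Matching degree of one perception data segment: the fraction of
    target environment items that also appear in the model's prediction.
    A set-overlap score is an illustrative stand-in."""
    if not target:
        return 1.0
    return len(set(predicted) & set(target)) / len(target)

def evaluate(predictions, targets, threshold=0.8):
    """Match each segment's predicted environment information against the
    target environment information, then aggregate the matching degrees
    into an evaluation result (the threshold is an assumed pass criterion)."""
    degrees = [segment_matching_degree(p, t) for p, t in zip(predictions, targets)]
    mean_degree = sum(degrees) / len(degrees)
    return {"degrees": degrees, "mean": mean_degree, "passed": mean_degree >= threshold}

# Two segments: predicted vs. target environment information
res = evaluate(
    predictions=[["car", "pedestrian"], ["car"]],
    targets=[["car", "pedestrian"], ["car", "cyclist"]],
)
```

Because matching is computed per segment before aggregation, a model that fails only in specific driving intervals is visible in the per-segment degrees, not hidden in the overall mean.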
All or part of the modules in the above evaluation device of the automatic driving perception model may be implemented by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure may be as shown in fig. 8. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode may be realized through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements the evaluation method of the automatic driving perception model. The display unit of the computer device is used to present a visual display and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring data to be evaluated, wherein the data to be evaluated comprises sensing data acquired in the running process of a vehicle;
for a target evaluation item, determining target environment information of the vehicle corresponding to the data to be evaluated;
acquiring predicted environment information of the vehicle, predicted after the perception model to be evaluated perceives the data to be evaluated;
matching the predicted environment information and the target environment information of the vehicle to obtain a matching result;
and determining an evaluation result of the perception model to be evaluated according to the matching result.
In one embodiment, the processor when executing the computer program further performs the steps of: acquiring sensing data continuously acquired in the running process of a vehicle; dividing the perception data according to a set time interval to obtain a plurality of divided perception data segments; and taking the plurality of perception data segments as the data to be evaluated.
In one embodiment, the perception data segment comprises a plurality of frames; the processor when executing the computer program further implements the steps of: identifying a perception object in each frame of the perception data segment; and determining the time sequence track of the same perceived object under different frames according to the perceived object of each frame.
In one embodiment, the processor when executing the computer program further performs the steps of: establishing a tree structure of the perception data according to a plurality of perception data segments of the perception data, a plurality of frames corresponding to each perception data segment, a perception object of each frame and time sequence tracks of the same perception object under different frames; and taking the perception data of the tree structure as the data to be evaluated.
In one embodiment, the processor when executing the computer program further performs the steps of: and storing the perception data of the tree structure.
In one embodiment, the processor when executing the computer program further performs the steps of: for each perception data segment in the data to be evaluated, inputting the perception data segment into the perception model to be evaluated, and obtaining the predicted environment information of the vehicle predicted by the perception model to be evaluated for that segment.
In one embodiment, the processor when executing the computer program further performs the steps of: matching the predicted environment information of the vehicle predicted for each perception data segment with the corresponding target environment information to obtain the matching degree of each perception data segment; and performing statistics on the matching degrees of the perception data segments in the data to be evaluated to obtain a matching result for the data to be evaluated.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring data to be evaluated, wherein the data to be evaluated comprises sensing data acquired in the running process of a vehicle;
for a target evaluation item, determining target environment information of the vehicle corresponding to the data to be evaluated;
acquiring predicted environment information of the vehicle, predicted after the perception model to be evaluated perceives the data to be evaluated;
matching the predicted environment information and the target environment information of the vehicle to obtain a matching result;
and determining an evaluation result of the perception model to be evaluated according to the matching result.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring sensing data continuously acquired in the running process of a vehicle; dividing the perception data according to a set time interval to obtain a plurality of divided perception data segments; and taking the plurality of perception data segments as the data to be evaluated.
In one embodiment, the perception data segment comprises a plurality of frames; the computer program when executed by the processor further implements the steps of: identifying a perception object in each frame of the perception data segment; and determining the time sequence track of the same perceived object under different frames according to the perceived object of each frame.
In one embodiment, the computer program when executed by the processor further performs the steps of: establishing a tree structure of the perception data according to a plurality of perception data segments of the perception data, a plurality of frames corresponding to each perception data segment, a perception object of each frame and time sequence tracks of the same perception object under different frames; and taking the perception data of the tree structure as the data to be evaluated.
In one embodiment, the computer program when executed by the processor further performs the steps of: and storing the perception data of the tree structure.
In one embodiment, the computer program when executed by the processor further performs the steps of: for each perception data segment in the data to be evaluated, inputting the perception data segment into the perception model to be evaluated, and obtaining the predicted environment information of the vehicle predicted by the perception model to be evaluated for that segment.
In one embodiment, the computer program when executed by the processor further performs the steps of: matching the predicted environment information of the vehicle predicted for each perception data segment with the corresponding target environment information to obtain the matching degree of each perception data segment; and performing statistics on the matching degrees of the perception data segments in the data to be evaluated to obtain a matching result for the data to be evaluated.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
acquiring data to be evaluated, wherein the data to be evaluated comprises sensing data acquired in the running process of a vehicle;
for a target evaluation item, determining target environment information of the vehicle corresponding to the data to be evaluated;
acquiring predicted environment information of the vehicle, predicted after the perception model to be evaluated perceives the data to be evaluated;
matching the predicted environment information and the target environment information of the vehicle to obtain a matching result;
and determining an evaluation result of the perception model to be evaluated according to the matching result.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring sensing data continuously acquired in the running process of a vehicle; dividing the perception data according to a set time interval to obtain a plurality of divided perception data segments; and taking the plurality of perception data segments as the data to be evaluated.
In one embodiment, the perception data segment comprises a plurality of frames; the computer program when executed by the processor further implements the steps of: identifying a perception object in each frame of the perception data segment; and determining the time sequence track of the same perceived object under different frames according to the perceived object of each frame.
In one embodiment, the computer program when executed by the processor further performs the steps of: establishing a tree structure of the perception data according to a plurality of perception data segments of the perception data, a plurality of frames corresponding to each perception data segment, a perception object of each frame and time sequence tracks of the same perception object under different frames; and taking the perception data of the tree structure as the data to be evaluated.
In one embodiment, the computer program when executed by the processor further performs the steps of: and storing the perception data of the tree structure.
In one embodiment, the computer program when executed by the processor further performs the steps of: for each perception data segment in the data to be evaluated, inputting the perception data segment into the perception model to be evaluated, and obtaining the predicted environment information of the vehicle predicted by the perception model to be evaluated for that segment.
In one embodiment, the computer program when executed by the processor further performs the steps of: matching the predicted environment information of the vehicle predicted for each perception data segment with the corresponding target environment information to obtain the matching degree of each perception data segment; and performing statistics on the matching degrees of the perception data segments in the data to be evaluated to obtain a matching result for the data to be evaluated.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by a computer program instructing the relevant hardware; the computer program may be stored on a non-volatile computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples represent only a few embodiments of the present application, which are described in relative detail but are not thereby to be construed as limiting the scope of the application. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the concept of the present application, and all of these fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.
Claims (10)
1. An evaluation method of an autopilot perception model, characterized in that the method comprises:
acquiring data to be evaluated, wherein the data to be evaluated comprises sensing data acquired in the running process of a vehicle;
for a target evaluation item, determining target environment information of the vehicle corresponding to the data to be evaluated;
acquiring predicted environment information of the vehicle, predicted after the perception model to be evaluated perceives the data to be evaluated;
matching the predicted environment information and the target environment information of the vehicle to obtain a matching result;
and determining an evaluation result of the perception model to be evaluated according to the matching result.
2. The method according to claim 1, wherein the obtaining the data to be evaluated comprises:
acquiring sensing data continuously acquired in the running process of a vehicle;
dividing the perception data according to a set time interval to obtain a plurality of divided perception data segments;
and taking the plurality of perception data segments as the data to be evaluated.
3. The method of claim 2, wherein the perception data segment comprises a plurality of frames; after the plurality of segmented perception data segments are obtained, the method further comprises:
identifying a perception object of each frame in the perception data segment;
and determining the time sequence track of the same perceived object under different frames according to the perceived object of each frame.
4. A method according to claim 3, wherein said taking said plurality of perception data segments as said data to be evaluated comprises:
establishing a tree structure of the perception data according to a plurality of perception data segments of the perception data, a plurality of frames corresponding to each perception data segment, a perception object of each frame and time sequence tracks of the same perception object under different frames;
and taking the perception data of the tree structure as the data to be evaluated.
5. The method of claim 4, wherein after the creating the tree structure of the perceptual data, the method further comprises:
and storing the perception data of the tree structure.
6. The method according to any one of claims 2 to 5, wherein the acquiring the predicted environment information of the vehicle predicted after the perception model to be evaluated perceives the data to be evaluated comprises:
for each perception data segment in the data to be evaluated, inputting the perception data segment into the perception model to be evaluated, and obtaining the predicted environment information of the vehicle predicted by the perception model to be evaluated for that segment.
7. The method according to claim 6, wherein the matching the predicted environmental information and the target environmental information of the vehicle to obtain a matching result includes:
matching the predicted environment information of the vehicle predicted for each perception data segment with the corresponding target environment information to obtain the matching degree of each perception data segment;
and performing statistics on the matching degrees of the perception data segments in the data to be evaluated to obtain a matching result for the data to be evaluated.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310172817.6A CN116152637A (en) | 2023-02-23 | 2023-02-23 | Evaluation method of automatic driving perception model, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116152637A true CN116152637A (en) | 2023-05-23 |
Family
ID=86356121
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN116500565A * | 2023-06-28 | 2023-07-28 | 小米汽车科技有限公司 | Method, device and equipment for evaluating automatic driving perception detection capability
CN116500565B * | 2023-06-28 | 2023-10-13 | 小米汽车科技有限公司 | Method, device and equipment for evaluating automatic driving perception detection capability
CN119169590A * | 2024-11-20 | 2024-12-20 | 福瑞泰克智能系统有限公司 | Perception model evaluation method and device, storage medium and electronic device
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |