CN115588185B - Driving route generation method and device, electronic equipment and computer readable medium - Google Patents
- Publication number
- CN115588185B (application number CN202211422241A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- obstacle
- preset
- obstacle vehicle
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0011—Planning or execution of driving tasks involving control alternatives for a single driving scenario, e.g. planning several paths to avoid obstacles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The invention discloses a driving route generation method and device, an electronic device, and a computer-readable medium. One embodiment of the method comprises: determining the two-dimensional coordinates of each wheel grounding point of an obstacle vehicle according to a foreground image acquired by a front camera of a target vehicle; determining the three-dimensional coordinates of each wheel grounding point according to the two-dimensional coordinates; determining the heading angle of the obstacle vehicle as an initial obstacle vehicle heading angle; generating a unit vector of the obstacle vehicle heading angle according to the initial heading angle and the three-dimensional coordinates; generating the obstacle vehicle heading angle according to the unit vector; generating a rotation matrix of the obstacle vehicle according to the heading angle; generating the center coordinates of the obstacle vehicle according to the rotation matrix; generating a bounding box of the obstacle vehicle according to the center coordinates and the rotation matrix; and generating a driving route based on the bounding box. This embodiment reduces the number of times a traveling vehicle fails to safely avoid an obstacle vehicle and improves driving safety.
Description
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a driving route generation method and apparatus, an electronic device, and a computer-readable medium.
Background
The heading angle of an obstacle vehicle characterizes its driving attitude and provides a basis for planning the driving route of the ego vehicle. Currently, the heading angle of an obstacle vehicle is generally determined as follows: a pre-trained neural network model outputs the heading angle directly.
However, the inventors found that determining the heading angle of an obstacle vehicle in this way often suffers from the following technical problems:
First, when the training set used to train the neural network model is small, the accuracy of the heading angle the model determines is low, so the traveling vehicle frequently fails to safely avoid the obstacle vehicle, and safety is poor.
Second, when the obstacle vehicle is heavily occluded, the accuracy of the obstacle vehicle position generated by the neural network model is low, again causing frequent failures to safely avoid the obstacle vehicle and poor safety.
The above information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, it may contain information that does not form the prior art that is already known to a person of ordinary skill in the art in this country.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a driving route generation method, apparatus, electronic device and computer readable medium to solve one or more of the technical problems set forth in the background section above.
In a first aspect, some embodiments of the present disclosure provide a driving route generation method, including: determining respective two-dimensional coordinates of respective wheel grounding points of an obstacle vehicle according to a foreground image acquired by a front camera of a target vehicle; determining three-dimensional coordinates of the wheel grounding points in a vehicle coordinate system of the target vehicle based on the two-dimensional coordinates; determining the heading angle of the obstacle vehicle detected by a target detection model as an initial obstacle vehicle heading angle; generating a unit vector of the obstacle vehicle heading angle according to the initial obstacle vehicle heading angle and the three-dimensional coordinates; generating the obstacle vehicle heading angle according to the unit vector of the obstacle vehicle heading angle; generating a rotation matrix of the obstacle vehicle in the vehicle coordinate system according to the obstacle vehicle heading angle; generating center coordinates of the obstacle vehicle in the vehicle coordinate system based on the rotation matrix; generating a bounding box of the obstacle vehicle according to the center coordinates and the rotation matrix; and generating a driving route of the target vehicle based on the bounding box.
In a second aspect, some embodiments of the present disclosure provide a driving route generation apparatus, including: a first determination unit configured to determine respective two-dimensional coordinates of respective wheel grounding points of an obstacle vehicle according to a foreground image acquired by a front camera of a target vehicle; a second determination unit configured to determine, based on the two-dimensional coordinates, three-dimensional coordinates of the wheel grounding points in a vehicle coordinate system of the target vehicle; a third determination unit configured to determine the heading angle of the obstacle vehicle detected by a target detection model as an initial obstacle vehicle heading angle; a first generation unit configured to generate a unit vector of the obstacle vehicle heading angle according to the initial obstacle vehicle heading angle and the three-dimensional coordinates; a second generation unit configured to generate the obstacle vehicle heading angle according to the unit vector of the obstacle vehicle heading angle; a third generation unit configured to generate a rotation matrix of the obstacle vehicle in the vehicle coordinate system according to the obstacle vehicle heading angle; a fourth generation unit configured to generate center coordinates of the obstacle vehicle in the vehicle coordinate system based on the rotation matrix; a fifth generation unit configured to generate a bounding box of the obstacle vehicle based on the center coordinates and the rotation matrix; and a sixth generation unit configured to generate a driving route of the target vehicle based on the bounding box.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device, on which one or more programs are stored, which when executed by one or more processors cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following beneficial effects: with the driving route generation method of some embodiments of the disclosure, the number of times a traveling vehicle fails to safely avoid an obstacle vehicle is reduced, and driving safety is improved. Specifically, such failures arise because, when the training set for the neural network model is small, the heading angle the model determines is inaccurate. Based on this, the method of some embodiments first determines the two-dimensional coordinates of each wheel grounding point of the obstacle vehicle from a foreground image acquired by the front camera of the target vehicle, giving the coordinates of the grounding points in the image. The three-dimensional coordinates of the grounding points in the vehicle coordinate system of the target vehicle are then determined from the two-dimensional coordinates. Next, the heading angle detected by the target detection model is taken as the initial obstacle vehicle heading angle, which is used to generate the heading-angle unit vector. A unit vector of the obstacle vehicle heading angle is then generated from the initial heading angle and the three-dimensional coordinates, improving the accuracy of the resulting heading angle.
The obstacle vehicle heading angle is generated from this unit vector. Next, a rotation matrix of the obstacle vehicle in the vehicle coordinate system is generated from the heading angle; the rotation matrix characterizes the rotational attitude of the obstacle vehicle. The center coordinates of the obstacle vehicle in the vehicle coordinate system are generated from the rotation matrix; they represent the position of the obstacle vehicle. A bounding box of the obstacle vehicle is then generated from the center coordinates and the rotation matrix, characterizing the rotation direction and rotational attitude of the obstacle vehicle. Finally, a driving route of the target vehicle is generated from the bounding box and can serve as the route the target vehicle follows. Because the heading angle is not determined directly by a neural network model, but generated from the three-dimensional coordinates of the wheel grounding points together with the initial heading angle detected by the target detection model, its accuracy is improved. Further, determining the rotation matrix and the center coordinates yields the bounding box and thus the position and attitude of the obstacle vehicle.
The generated driving route therefore further reduces the number of times the traveling vehicle fails to safely avoid the obstacle vehicle and improves driving safety.
Drawings
The above and other features, advantages, and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
FIG. 1 is a flow diagram of some embodiments of a travel route generation method according to the present disclosure;
FIG. 2 is a schematic view of respective wheel grounding points of an obstacle vehicle in accordance with some embodiments of a travel route generation method of the present disclosure;
FIG. 3 is a schematic block diagram of some embodiments of a travel route generation apparatus according to the present disclosure;
FIG. 4 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of a travel route generation method according to the present disclosure. The driving route generation method comprises the following steps:
Step 101: determining respective two-dimensional coordinates of respective wheel grounding points of an obstacle vehicle according to a foreground image acquired by a front camera of a target vehicle.
In some embodiments, an executing body (e.g., a computing device) of the driving route generation method may determine the two-dimensional coordinates of each wheel grounding point of the obstacle vehicle from a foreground image acquired by a front camera of the target vehicle. The target vehicle may be a currently traveling unmanned vehicle. The foreground image may be a captured image of the obstacle vehicle. The wheel grounding points may be the contact points of the four wheels of the obstacle vehicle with the ground, and the two-dimensional coordinates may be the coordinates of those grounding points in the foreground image.
In practice, the executing body may input the foreground image into a pre-trained wheel grounding point detection model to obtain the two-dimensional coordinates of each wheel grounding point. The wheel grounding point detection model may be a neural network model that takes a vehicle image as input and outputs the two-dimensional coordinates of the recognized wheel grounding points; for example, it may be a convolutional neural network model. The resulting two-dimensional coordinates can then be used to generate the three-dimensional coordinates of the wheel grounding points.
The executing body may be a vehicle-mounted terminal of the target vehicle, or a server communicatively connected to the vehicle-mounted terminal of the target vehicle.
Step 102: determining, according to the two-dimensional coordinates, three-dimensional coordinates of the wheel grounding points in the vehicle coordinate system of the target vehicle.
In some embodiments, the execution body may determine respective three-dimensional coordinates of the respective wheel grounding points in the vehicle coordinate system of the target vehicle from the respective two-dimensional coordinates. The vehicle coordinate system of the target vehicle may be a vehicle coordinate system of a currently traveling unmanned vehicle. In practice, the executing body may convert the two-dimensional coordinates into the vehicle coordinate system to obtain three-dimensional coordinates. Thereby, respective three-dimensional coordinates of respective wheel grounding points are obtained, and thus, can be used to generate a unit vector of the heading angle of the obstacle vehicle.
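The conversion from image coordinates to the vehicle coordinate system can be implemented by back-projecting each pixel through the calibrated front camera onto the road plane. The patent does not give the conversion formula, so the sketch below is only an illustrative implementation under common assumptions: a pinhole camera with known intrinsics `K` and vehicle-to-camera extrinsics `R_vc`, `t_vc` (hypothetical names), and a flat road at height z = 0 in the vehicle frame — which is exactly where wheel grounding points lie.

```python
import numpy as np

def backproject_to_ground(uv, K, R_vc, t_vc):
    """Back-project pixel (u, v) onto the ground plane z = 0 of the vehicle
    frame. R_vc, t_vc map vehicle to camera coordinates
    (x_cam = R_vc @ x_veh + t_vc); both are assumed known from calibration."""
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])  # viewing ray
    ray_veh = R_vc.T @ ray_cam               # ray direction in vehicle frame
    cam_center = -R_vc.T @ t_vc              # camera center in vehicle frame
    # Wheel grounding points touch the road, so their vehicle-frame height is
    # zero: intersect the viewing ray with the plane z = 0.
    s = -cam_center[2] / ray_veh[2]
    return cam_center + s * ray_veh

# Example calibration: forward-looking camera mounted 1.5 m above the road
# (vehicle frame: x forward, y left, z up; camera frame: z forward, y down).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R_vc = np.array([[0.0, -1.0, 0.0],
                 [0.0, 0.0, -1.0],
                 [1.0, 0.0, 0.0]])
t_vc = -R_vc @ np.array([0.0, 0.0, 1.5])     # camera center at (0, 0, 1.5)
point = backproject_to_ground((320.0, 360.0), K, R_vc, t_vc)  # ≈ [10, 0, 0]
```

With this sample calibration, the pixel (320, 360) maps back to a grounding point about 10 m straight ahead of the vehicle on the road surface.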
Step 103: determining the heading angle of the obstacle vehicle detected by a target detection model as an initial obstacle vehicle heading angle.
In some embodiments, the executing body may determine the heading angle of the obstacle vehicle detected by the target detection model as the initial obstacle vehicle heading angle. The target detection model may be a detection model for detecting the heading angle of the obstacle vehicle, and may be, but is not limited to, one of the following: a YOLO object detection model or a CenterNet object detection model. The resulting initial obstacle vehicle heading angle can then be used to generate the obstacle vehicle heading-angle unit vector.
Step 104: generating a unit vector of the obstacle vehicle heading angle according to the initial obstacle vehicle heading angle and the three-dimensional coordinates.
In some embodiments, the execution subject may generate the unit vector of the heading angle of the obstacle vehicle according to the initial heading angle of the obstacle vehicle and the respective three-dimensional coordinates.
In practice, the execution subject may generate the unit vector of the heading angle of the obstacle vehicle according to the initial heading angle of the obstacle vehicle and the three-dimensional coordinates in various ways.
In some optional implementations of some embodiments, the executing agent may generate the unit vector of the heading angle of the obstacle vehicle according to the initial heading angle of the obstacle vehicle and the respective three-dimensional coordinates by:
First, the initial obstacle vehicle heading angle and a preset obstacle vehicle heading-angle unit vector are input into a preset heading-angle constraint function to obtain an initial obstacle vehicle heading-angle constraint. The preset obstacle vehicle heading-angle unit vector is the unit vector of the obstacle vehicle heading angle; here, both this unit vector and the obstacle vehicle heading angle are treated as unknowns. The preset heading-angle constraint function is a constraint function relating the preset heading-angle unit vector to the initial obstacle vehicle heading angle detected by the target detection model in the current frame and to the one detected in the previous frame.
Second, for every two three-dimensional coordinates that are adjacent along the horizontal axis, the following steps are executed:
In a first sub-step, the two three-dimensional coordinates are composed into a three-dimensional coordinate vector. As shown in Fig. 2, the horizontally adjacent pairs yield two such vectors: one from the three-dimensional coordinates of the grounding point of wheel No. 0 to those of the grounding point of wheel No. 1, and one from the grounding point of wheel No. 3 to the grounding point of wheel No. 2.
In a second sub-step, the composed three-dimensional coordinate vector and the preset obstacle vehicle heading-angle unit vector are input into a preset cross-axis three-dimensional coordinate vector constraint function to obtain a cross-axis three-dimensional coordinate vector constraint. The preset cross-axis constraint function is a constraint function on the preset heading-angle unit vector: the product of the unit vector of the cross-axis coordinate vector and the preset heading-angle unit vector is constrained to equal 0, which characterizes that the cross-axis coordinate vector is perpendicular to the heading-angle unit vector.
Third, for every two three-dimensional coordinates that are adjacent along the longitudinal axis, the following steps are executed:
In a first sub-step, the two three-dimensional coordinates are composed into a three-dimensional coordinate vector. As shown in Fig. 2, the longitudinally adjacent pairs yield two such vectors: one from the three-dimensional coordinates of the grounding point of wheel No. 3 to those of the grounding point of wheel No. 0, and one from the grounding point of wheel No. 2 to the grounding point of wheel No. 1.
In a second sub-step, the composed three-dimensional coordinate vector and the preset obstacle vehicle heading-angle unit vector are input into a preset longitudinal-axis three-dimensional coordinate vector constraint function to obtain a longitudinal-axis three-dimensional coordinate vector constraint. The preset longitudinal-axis constraint function is a constraint function on the preset heading-angle unit vector: the product of the unit vector of the longitudinal-axis coordinate vector and the preset heading-angle unit vector is constrained to equal 1, which characterizes that the two vectors coincide in direction.
Fourth, an obstacle vehicle heading-angle unit vector constraint is established from the initial obstacle vehicle heading-angle constraint, the obtained cross-axis three-dimensional coordinate vector constraints, and the longitudinal-axis three-dimensional coordinate vector constraints. In practice, the executing body may combine the initial heading-angle constraint with the cross-axis and longitudinal-axis constraints into a system of equations, and use the combined system as the obstacle vehicle heading-angle unit vector constraint.
Fifth, the unit vector of the obstacle vehicle heading angle is generated according to the heading-angle unit vector constraint. In practice, the executing body may solve for the preset heading-angle unit vector in the constraint system by the least squares method, and then apply vector normalization to the solved vector to obtain the unit vector of the obstacle vehicle heading angle, from which the heading angle itself can subsequently be generated.
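The five steps above amount to assembling a small linear system and solving it in the least-squares sense. The patent's exact constraint functions were published as formula images and are not reproduced here, so the sketch below is illustrative only: it encodes the relations the text states — cross-axis wheel vectors are perpendicular to the heading-angle unit vector (product 0), longitudinal vectors are parallel to it (product 1) — plus one row tying the solution to the detector's initial heading angle, followed by the final normalization.

```python
import numpy as np

def heading_unit_vector(wheels, theta_init):
    """wheels: dict of wheel index -> 3-D grounding point in the vehicle
    frame, numbered as in Fig. 2 (0/1 the front pair, 3/2 the rear pair).
    Returns the obstacle-vehicle heading-angle unit vector in the ground
    plane, solved from the stacked constraints by least squares."""
    unit = lambda v: v / np.linalg.norm(v)
    rows, rhs = [], []
    for a, b in [(0, 1), (3, 2)]:      # cross-axis vectors: dot product = 0
        rows.append(unit(wheels[b] - wheels[a])[:2]); rhs.append(0.0)
    for a, b in [(3, 0), (2, 1)]:      # longitudinal vectors: dot product = 1
        rows.append(unit(wheels[b] - wheels[a])[:2]); rhs.append(1.0)
    # Constraint from the initial heading angle detected by the model.
    rows.append([np.cos(theta_init), np.sin(theta_init)]); rhs.append(1.0)
    u, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return u / np.linalg.norm(u)       # final vector normalization step
```

For wheel grounding points of a vehicle actually heading at angle θ, the solver recovers approximately (cos θ, sin θ) even when the detector's initial angle is slightly off.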
Step 105: generating the obstacle vehicle heading angle according to the unit vector of the obstacle vehicle heading angle.
In some embodiments, the executing body may generate the obstacle vehicle heading angle from the heading-angle unit vector. The heading angle characterizes the rotation direction and rotation angle of the obstacle vehicle. In practice, the executing body may input the first element of the unit vector into the arccosine function to obtain the magnitude of the heading angle, and use the sign of the second element to obtain the rotation direction, the direction being indicated as positive or negative. Finally, the rotation direction and the heading-angle magnitude may be combined into the obstacle vehicle heading angle. This yields both the rotation direction and the rotation angle of the obstacle vehicle, further improving the accuracy of the heading angle.
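The recovery of the angle from its unit vector follows directly from the description above: the arccosine of the first component gives the magnitude, and the sign of the second component gives the direction. A minimal sketch (this is mathematically equivalent to the two-argument arctangent):

```python
import numpy as np

def heading_angle_from_unit_vector(u):
    """Recover the obstacle-vehicle heading angle from its unit vector
    (first component = cosine of the angle, second = sine). The arccosine
    of the first component gives the magnitude; the sign of the second
    component gives the rotation direction."""
    magnitude = np.arccos(np.clip(u[0], -1.0, 1.0))
    direction = 1.0 if u[1] >= 0.0 else -1.0
    return direction * magnitude   # same result as np.arctan2(u[1], u[0])
```

Using `np.clip` guards against first components that drift marginally outside [-1, 1] after the least-squares solve and normalization.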
And step 106, generating a rotation matrix of the obstacle vehicle in a vehicle coordinate system according to the heading angle of the obstacle vehicle.
In some embodiments, the execution body may generate a rotation matrix of the obstacle vehicle in the vehicle coordinate system according to the obstacle vehicle heading angle. The rotation direction of the obstacle vehicle may be clockwise or counterclockwise. The rotation matrix may represent the rotation posture of the obstacle vehicle relative to the target vehicle. In practice, the execution body may generate the rotation matrix of the obstacle vehicle in the vehicle coordinate system according to the heading angle of the obstacle vehicle in various ways.
In some optional implementations of some embodiments, the executing entity may generate a rotation matrix of the obstacle vehicle in the vehicle coordinate system according to the obstacle vehicle heading angle by:
In a first step, in response to determining that the heading angle of the obstacle vehicle is greater than a first preset value, the rotation direction of the obstacle vehicle is determined to be clockwise. For example, the first preset value may be 0. Here, the specific setting of the first preset value is not limited.
And secondly, determining the rotation direction of the obstacle vehicle to be a counterclockwise direction in response to determining that the heading angle of the obstacle vehicle is smaller than the first preset value.
And thirdly, determining whether the front camera is horizontally installed.
And fourthly, in response to determining that the front camera is horizontally installed, determining the obstacle vehicle pitch angle and the obstacle vehicle roll angle each to be a second preset value. For example, the second preset value may be 0. Here, the specific setting of the second preset value is not limited.
And fifthly, generating a rotation matrix according to the pitch angle of the obstacle vehicle, the roll angle of the obstacle vehicle and the heading angle of the obstacle vehicle. In practice, first, the execution body may rotate by the obstacle vehicle pitch angle about the x-axis to obtain an x-axis rotation matrix. Then, a rotation by the obstacle vehicle roll angle about the y-axis yields a y-axis rotation matrix. Next, a rotation by the obstacle vehicle heading angle about the z-axis yields a z-axis rotation matrix. Finally, the product of the x-axis rotation matrix, the y-axis rotation matrix and the z-axis rotation matrix is determined as the rotation matrix of the obstacle vehicle. Thereby, the rotation matrix can be generated, and the rotation posture of the obstacle vehicle can be obtained.
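A minimal sketch of this fifth step, assuming the standard elementary rotation matrices about the x-, y-, and z-axes (all helper names are invented for illustration). With a horizontally mounted camera, pitch and roll collapse to the preset value 0 and the product reduces to a pure heading rotation:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def obstacle_rotation(pitch, roll, heading):
    """Product of the per-axis rotation matrices, in the order stated in
    the text: x-axis (pitch) * y-axis (roll) * z-axis (heading)."""
    return matmul(matmul(rot_x(pitch), rot_y(roll)), rot_z(heading))

R = obstacle_rotation(0.0, 0.0, math.pi / 2)  # 90-degree heading, flat camera
```

Note that the multiplication order matters for a general pose; the order shown follows the step order given in the text.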
And step 107, generating the central coordinates of the obstacle vehicle in the vehicle coordinate system according to the rotation matrix.
In some embodiments, the execution body may generate center coordinates of the obstacle vehicle in a vehicle coordinate system based on the rotation matrix. In practice, the execution body may generate the center coordinates of the obstacle vehicle in the vehicle coordinate system from the rotation matrix in various ways.
In some optional implementations of some embodiments, the executing body may generate the center coordinates of the obstacle vehicle in the vehicle coordinate system according to the rotation matrix by:
firstly, inputting the foreground image into a vehicle type detection model trained in advance to obtain vehicle type information of the obstacle vehicle. The vehicle type detection model can be used for detecting type information of the obstacle vehicle. The vehicle type detection model may be, but is not limited to, one of the following: a CooVally detection model and an OpenVINO detection model. The vehicle type information may include a vehicle type. For example, the vehicle type may be, but is not limited to, one of the following: trucks, buses.
And secondly, selecting preset vehicle type information with the same preset vehicle type as the vehicle type from the preset vehicle type information set as target preset vehicle type information. The preset vehicle type information in the preset vehicle type information set may include a preset vehicle type and preset vehicle information corresponding to the preset vehicle type. The preset vehicle information may include a preset wheel base, a preset vehicle length, a preset vehicle width, and a preset vehicle height.
And thirdly, determining a preset wheel base included in the target preset vehicle type information as the vehicle wheel base of the obstacle vehicle.
And fourthly, determining the preset vehicle length included by the target preset vehicle type information as the vehicle length of the obstacle vehicle.
And fifthly, determining the preset vehicle width included in the target preset vehicle type information as the vehicle width of the obstacle vehicle.
And sixthly, determining the preset vehicle height included by the target preset vehicle type information as the vehicle height of the obstacle vehicle.
Seventh, a wheel grounding point is selected from the respective wheel grounding points as a target wheel grounding point. The target wheel ground point may be any wheel ground point of an obstacle vehicle.
And an eighth step of determining coordinates of the target wheel grounding point in a coordinate system of the obstacle vehicle as obstacle vehicle coordinates based on the vehicle wheel base, the vehicle length, the vehicle width, the vehicle height, and the three-dimensional coordinate of the target wheel grounding point. In practice, first, the execution body may determine the ratio of the vehicle wheel base to the vehicle length. Then, the product of the above ratio and the vehicle length may be determined as the x-coordinate of the target wheel grounding point in the coordinate system of the obstacle vehicle. Then, half of the vehicle width may be determined as the y-coordinate of the target wheel grounding point in the coordinate system of the obstacle vehicle. Finally, the z-coordinate of the target wheel grounding point in the coordinate system of the obstacle vehicle is determined by using the established coordinate system of the obstacle vehicle. For example, the z-coordinate of the target wheel grounding point in the coordinate system of the obstacle vehicle may be 0.
And a ninth step of determining a bottom edge center coordinate of the obstacle vehicle based on the three-dimensional coordinate of the target wheel grounding point, the obstacle vehicle coordinate, and the rotation matrix. In practice, first, the execution subject may determine a product of the rotation matrix and the obstacle vehicle coordinates. Then, a difference between the three-dimensional coordinate of the target wheel grounding point and the product may be determined as a bottom side center coordinate of the obstacle vehicle.
Tenth, the center coordinates of the obstacle vehicle in the vehicle coordinate system are determined based on the vehicle height and the bottom center coordinates. In practice, first, the execution body may determine the x-coordinate of the bottom edge center coordinate as the x-coordinate of the center coordinate. Then, the y-coordinate of the bottom edge center coordinate is determined as the y-coordinate of the center coordinate. Finally, the sum of the z-coordinate of the bottom edge center coordinate and half of the vehicle height is determined as the z-coordinate of the center coordinate. For example, the vehicle coordinate system may be a coordinate system established with the bottom center as an origin. Thereby, the center coordinates of the obstacle vehicle in the vehicle coordinate system can be generated, and the bounding box of the obstacle vehicle can be further generated.
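Steps eight through ten can be sketched as below. The half-width placement of the wheel point and the half-height lift of the center are assumptions chosen to be geometrically consistent with the surrounding steps, and all function names are hypothetical:

```python
def center_in_vehicle_frame(ground_pt, R, wheelbase, length, width, height):
    """Sketch of steps 8-10: express the target wheel grounding point in
    the obstacle's own frame, subtract its rotated offset from the
    measured 3-D point to get the bottom-face center, then lift the
    center by half the vehicle height."""
    # Step 8: wheel grounding point in the obstacle frame (z = 0 on ground).
    p_obs = ((wheelbase / length) * length, width / 2.0, 0.0)
    # Step 9: bottom-face center = measured point - R @ p_obs.
    rotated = [sum(R[i][k] * p_obs[k] for k in range(3)) for i in range(3)]
    bottom = [ground_pt[i] - rotated[i] for i in range(3)]
    # Step 10: center shares x, y with the bottom center; z rises by h/2.
    return (bottom[0], bottom[1], bottom[2] + height / 2.0)

# Identity rotation and plausible sedan dimensions (illustrative numbers).
R_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
c = center_in_vehicle_frame((10.0, 2.0, 0.0), R_identity, 2.8, 4.5, 1.8, 1.5)
```

With the identity rotation the subtraction simply shifts the wheel point back to the bottom-face center, which makes the three steps easy to check by hand.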
And step 108, generating a bounding box of the obstacle vehicle according to the central coordinate and the rotation matrix.
In some embodiments, the execution body may generate a bounding box of the obstacle vehicle based on the center coordinates and the rotation matrix. In practice, the execution body described above may generate the bounding box of the obstacle vehicle from the center coordinates and the rotation matrix in various ways.
In some optional implementations of some embodiments, the executing body may generate the bounding box of the obstacle vehicle according to the center coordinates and the rotation matrix by:
first, bounding box coordinate information of the obstacle vehicle is determined based on the vehicle length, the vehicle width, and the vehicle height. In practice, the execution body may determine bounding box coordinate information of the obstacle vehicle by using a bounding box algorithm based on the vehicle length, the vehicle width, and the vehicle height to determine frame data of a bounding box. For example, the bounding box algorithm described above may be, but is not limited to, one of the following: AABB bounding box, OBB bounding box.
And secondly, generating bounding box position information according to the rotation matrix and the central coordinate. In practice, first, the execution body may determine the rotation matrix as a bounding box rotation posture. The center coordinates can then be determined as bounding box center coordinates. Finally, the bounding box rotational attitude and the bounding box center coordinate may be combined into bounding box position information.
And thirdly, generating a bounding box of the obstacle vehicle according to the bounding box coordinate information and the bounding box position information. In practice, the execution body may combine the bounding box coordinate information and the bounding box position information to obtain bounding box information, and create a bounding box. Thus, the bounding box of the obstacle vehicle can be generated, and the coordinate information and the position information of the obstacle vehicle can be obtained.
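The combination of frame data (vehicle dimensions) and position information (rotation matrix plus center coordinate) into a bounding box might look like the sketch below. Enumerating the eight corners is one conventional oriented-bounding-box representation; the names are illustrative, not from the original:

```python
def obstacle_bounding_box(length, width, height, R, center):
    """Combine the frame data (an axis-aligned box of the vehicle's
    dimensions, centered at the origin) with the pose (rotation matrix
    and center coordinate) into the eight world-frame corners of an
    oriented bounding box."""
    corners = []
    for sx in (-0.5, 0.5):
        for sy in (-0.5, 0.5):
            for sz in (-0.5, 0.5):
                local = (sx * length, sy * width, sz * height)
                # Rotate the local corner, then translate by the center.
                world = tuple(
                    center[i] + sum(R[i][k] * local[k] for k in range(3))
                    for i in range(3))
                corners.append(world)
    return corners

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
box = obstacle_bounding_box(4.5, 1.8, 1.5, identity, (7.2, 1.1, 0.75))
```

With the identity rotation this degenerates to an AABB; a nonzero heading in the rotation matrix turns the same eight corners into an OBB.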
This technical scheme serves as an invention point of the embodiments of the present disclosure, and solves the technical problem mentioned in the background art: when the obstacle vehicle is severely occluded, the accuracy of the obstacle vehicle position generated by the neural network model is low, so the running vehicle frequently cannot safely avoid the obstacle vehicle, and safety is poor. The factor behind the frequent failure to safely avoid the obstacle vehicle and the resulting poor safety is as follows: when the occlusion of the obstacle vehicle is severe, the accuracy of the obstacle vehicle position generated through the neural network model is low. If this factor is addressed, the number of times the running vehicle cannot safely avoid the obstacle vehicle can be reduced and running safety improved. To achieve this effect, when the obstacle vehicle is severely occluded and the generated position accuracy is low, the center coordinates of the obstacle vehicle in the vehicle coordinate system are generated by using the rotation matrix. Then, the obtained center coordinates and the rotation matrix are processed to generate the bounding box of the obstacle vehicle. The bounding box may represent the position information and coordinate information of the obstacle vehicle. Therefore, the accuracy of the generated obstacle vehicle position is improved, the number of times the running vehicle cannot safely avoid the obstacle vehicle is reduced, and running safety is improved.
And step 109, generating a running route of the target vehicle based on the bounding box.
In some embodiments, the execution body may generate the travel route of the target vehicle based on the bounding box. The travel route may be a route on which the target vehicle travels. In practice, the execution body may generate the travel route of the target vehicle based on the bounding box in various ways.
In some optional implementations of some embodiments, the executing body may generate the driving route of the target vehicle based on the bounding box by:
first, the shortest distance between the obstacle vehicle and the target vehicle is determined according to the bounding box. In practice, the execution body may determine the coordinates and the position information of the bounding box as the coordinates and the position information of the obstacle vehicle. Then, the distance between the coordinates of the obstacle vehicle and the coordinates of the target vehicle may be determined as the shortest distance between the obstacle vehicle and the target vehicle.
And a second step of determining the original driving route of the target vehicle as the driving route of the target vehicle in response to the determination that the shortest distance is greater than or equal to a preset shortest distance.
And thirdly, generating an optimal running route of the target vehicle for avoiding the obstacle vehicle as the running route of the target vehicle in response to the fact that the shortest distance is smaller than the preset shortest distance. In practice, the executing body may generate the driving route of the target vehicle by using an obstacle avoidance path planning algorithm. For example, the preset shortest distance may be 1 meter.
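The distance check that selects between the original route and the avoidance route can be sketched as follows; the 1-meter preset matches the example in the text, while the function names and route placeholders are illustrative:

```python
import math

def plan_route(obstacle_xy, target_xy, original_route, avoid_route,
               preset_min_dist=1.0):
    """Keep the original route when the obstacle is at least the preset
    shortest distance away; otherwise fall back to the avoidance route
    produced by the obstacle avoidance path planning step."""
    shortest = math.dist(obstacle_xy, target_xy)
    if shortest >= preset_min_dist:
        return original_route
    return avoid_route

# Obstacle 5 m away: the original route is kept unchanged.
route = plan_route((3.0, 4.0), (0.0, 0.0), "original", "avoid")
```

The actual avoidance route would come from a path-planning algorithm as stated in the text; here a placeholder string stands in for it.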
In some optional implementations of some embodiments, the execution body may generate an optimal driving route of the target vehicle avoiding the obstacle vehicle as the driving route of the target vehicle according to the coordinate information and the position information of the bounding box. In practice, the execution body may generate the driving route of the target vehicle by using an obstacle avoidance path planning algorithm.
Alternatively, after step 109, the executing body may further control the target vehicle to travel according to the travel route.
The above embodiments of the present disclosure have the following advantages: with the driving route generation method of some embodiments of the present disclosure, the number of times the running vehicle cannot safely avoid the obstacle vehicle is reduced, and driving safety is improved. Specifically, the reason the running vehicle frequently cannot safely avoid the obstacle vehicle, with poor safety, is that: when the obstacle vehicle is severely occluded, the accuracy of the obstacle vehicle position generated by the neural network model is low, so the running vehicle frequently cannot safely avoid the obstacle vehicle, and safety is poor. Based on this, the driving route generation method of some embodiments of the present disclosure first determines the respective two-dimensional coordinates of the respective wheel grounding points of the obstacle vehicle from a foreground image acquired by a front camera of the target vehicle. Thereby, the coordinates of the respective wheel grounding points in the foreground image can be determined. Then, the respective three-dimensional coordinates of the wheel grounding points in the vehicle coordinate system of the target vehicle are determined based on the two-dimensional coordinates. Thereby, the respective three-dimensional coordinates of the respective wheel grounding points can be obtained. Next, the heading angle of the obstacle vehicle detected by the target detection model is determined as the initial obstacle vehicle heading angle. Thus, the initial obstacle vehicle heading angle can be obtained and used to generate the obstacle vehicle heading angle unit vector. Next, the unit vector of the heading angle of the obstacle vehicle is generated according to the initial obstacle vehicle heading angle and the three-dimensional coordinates.
Therefore, the unit vector of the heading angle of the obstacle vehicle can be obtained, and the accuracy of the heading angle of the obstacle vehicle can be improved. Then, the heading angle of the obstacle vehicle is generated according to the unit vector of the heading angle of the obstacle vehicle; thereby the heading angle of the obstacle vehicle can be obtained, and its accuracy improved. Then, the rotation matrix of the obstacle vehicle in the vehicle coordinate system is generated according to the heading angle of the obstacle vehicle; the rotation matrix can characterize the rotation posture of the obstacle vehicle. Next, the center coordinates of the obstacle vehicle in the vehicle coordinate system are generated based on the rotation matrix; the center coordinates can represent the coordinate position of the obstacle vehicle. Next, the bounding box of the obstacle vehicle is generated based on the center coordinates and the rotation matrix; the bounding box can characterize the position and rotation posture of the obstacle vehicle. Finally, the travel route of the target vehicle is generated based on the bounding box, and can serve as the route along which the target vehicle travels. The heading angle of the obstacle vehicle is not determined directly by the neural network model; instead, it is generated from the three-dimensional coordinates of the wheel grounding points of the obstacle vehicle and the initial obstacle vehicle heading angle detected by the target detection model, which improves the accuracy of the heading angle of the obstacle vehicle. And because the rotation matrix of the obstacle vehicle and the center coordinates of the obstacle vehicle are determined, the bounding box of the obstacle vehicle is further generated, thereby determining the position and posture information of the obstacle vehicle.
Therefore, the running route of the target vehicle is generated, the frequency that the running vehicle cannot safely avoid the obstacle vehicle can be further reduced through the generated running route, and the running safety is improved.
With further reference to fig. 3, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of a driving route generation apparatus, which correspond to the method embodiments illustrated in fig. 1, and which may be applied in particular to various electronic devices.
As shown in fig. 3, a travel route generation device 300 of some embodiments includes: a first determining unit 301, a second determining unit 302, a third determining unit 303, a first generating unit 304, a second generating unit 305, a third generating unit 306, a fourth generating unit 307, a fifth generating unit 308, and a sixth generating unit 309. Wherein the first determining unit 301 is configured to determine respective two-dimensional coordinates of respective wheel grounding points of the obstacle vehicle from the foreground image acquired by the front camera of the target vehicle; the second determining unit 302 is configured to determine, from the respective two-dimensional coordinates, respective three-dimensional coordinates of the respective wheel grounding points in the vehicle coordinate system of the target vehicle; the third determining unit 303 is configured to determine the heading angle of the obstacle vehicle detected by the target detection model as an initial obstacle vehicle heading angle; the first generating unit 304 is configured to generate a unit vector of the heading angle of the obstacle vehicle according to the initial obstacle vehicle heading angle and the respective three-dimensional coordinates; the second generating unit 305 is configured to generate an obstacle vehicle heading angle from the obstacle vehicle heading angle unit vector; the third generating unit 306 is configured to generate a rotation matrix of the obstacle vehicle in the vehicle coordinate system according to the obstacle vehicle heading angle; the fourth generation unit 307 is configured to generate center coordinates of the obstacle vehicle in the vehicle coordinate system, based on the rotation matrix; the fifth generating unit 308 is configured to generate a bounding box of the obstacle vehicle based on the center coordinates and the rotation matrix; the sixth generating unit 309 is configured to generate the travel route of the target vehicle based on 
the bounding box.
It will be appreciated that the units described in the apparatus 300 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations, features and advantages described above with respect to the method are also applicable to the apparatus 300 and the units included therein, and are not described herein again.
Referring now to FIG. 4, shown is a schematic block diagram of an electronic device 400 (e.g., a computing device) suitable for use in implementing some embodiments of the present disclosure. The electronic device in some embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the electronic device 400 may include a processing means 401 (e.g., a central processing unit, a graphics processor, etc.) that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage means 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic apparatus 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 4 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through communications device 409, or installed from storage device 408, or installed from ROM 402. The computer program, when executed by the processing apparatus 401, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining each two-dimensional coordinate of each wheel grounding point of the obstacle vehicle according to the foreground image acquired by the front camera of the target vehicle; determining three-dimensional coordinates of the wheel contact points in a vehicle coordinate system of the target vehicle based on the two-dimensional coordinates; determining the course angle of the obstacle vehicle detected by the target detection model as an initial obstacle vehicle course angle; generating a unit vector of the course angle of the obstacle vehicle according to the course angle of the initial obstacle vehicle and the three-dimensional coordinates; generating a course angle of the obstacle vehicle according to the course angle unit vector of the obstacle vehicle; generating a rotation matrix of the obstacle vehicle under the vehicle coordinate system according to the heading angle of the obstacle vehicle; generating center coordinates of the obstacle vehicle in the vehicle coordinate system based on the rotation matrix; generating a bounding box of the obstacle vehicle according to the central coordinate and the rotation matrix; and generating a driving route of the target vehicle based on the bounding box.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a first determining unit, a second determining unit, a third determining unit, a first generating unit, a second generating unit, a third generating unit, a fourth generating unit, a fifth generating unit, and a sixth generating unit. Where the names of the units do not in some cases constitute a limitation on the units themselves, the first determination unit may also be described as a "unit that determines respective two-dimensional coordinates of respective wheel contact points of the obstacle vehicle from a foreground image acquired by a front camera of the target vehicle", for example.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, and also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.
Claims (8)
1. A travel route generation method comprising:
determining respective two-dimensional coordinates of respective wheel grounding points of an obstacle vehicle according to a foreground image acquired by a front camera of a target vehicle, wherein the foreground image is a photographed image of the obstacle vehicle, the wheel grounding points are tangent points of the four wheels of the obstacle vehicle with the ground, and the two-dimensional coordinates are the two-dimensional coordinates of the wheel grounding points in the foreground image, and wherein the determining of the respective two-dimensional coordinates of the respective wheel grounding points of the obstacle vehicle according to the foreground image acquired by the front camera of the target vehicle comprises:
inputting the foreground image into a pre-trained wheel grounding point detection model to obtain each two-dimensional coordinate of each wheel grounding point, wherein the wheel grounding point detection model is a neural network model which takes a vehicle image as input data and takes the two-dimensional coordinate of the identified wheel grounding point as output data;
determining each three-dimensional coordinate of each wheel grounding point under a vehicle coordinate system of the target vehicle according to each two-dimensional coordinate;
determining the course angle of the obstacle vehicle detected by the target detection model as an initial obstacle vehicle course angle;
generating a unit vector of the course angle of the obstacle vehicle according to the course angle of the initial obstacle vehicle and each three-dimensional coordinate, wherein the generating of the unit vector of the course angle of the obstacle vehicle according to the course angle of the initial obstacle vehicle and each three-dimensional coordinate comprises the following steps:
inputting the initial obstacle vehicle course angle and a preset obstacle vehicle course angle unit vector to a preset course angle constraint function to obtain initial obstacle vehicle course angle constraint, wherein the preset obstacle vehicle course angle unit vector is the course angle unit vector of the obstacle vehicle, and the preset course angle constraint function is the constraint function for constraining the preset obstacle vehicle course angle unit vector and the initial obstacle vehicle course angle;
for every two of the respective three-dimensional coordinates that are adjacent along the transverse axis, executing the following steps:
forming a three-dimensional coordinate vector by the two three-dimensional coordinates;
inputting the formed three-dimensional coordinate vector and the preset obstacle vehicle course angle unit vector into a preset transverse-axis three-dimensional coordinate vector constraint function to obtain a transverse-axis three-dimensional coordinate vector constraint, wherein the preset transverse-axis three-dimensional coordinate vector constraint function is a constraint function on the preset obstacle vehicle course angle unit vector;
for every two of the respective three-dimensional coordinates that are adjacent along the longitudinal axis, executing the following steps:
forming a three-dimensional coordinate vector by the two three-dimensional coordinates;
inputting the formed three-dimensional coordinate vector and the preset obstacle vehicle course angle unit vector into a preset longitudinal-axis three-dimensional coordinate vector constraint function to obtain a longitudinal-axis three-dimensional coordinate vector constraint, wherein the preset longitudinal-axis three-dimensional coordinate vector constraint function is a constraint function on the preset obstacle vehicle course angle unit vector;
establishing a course angle unit vector constraint of the obstacle vehicle according to the initial obstacle vehicle course angle constraint, the obtained transverse-axis three-dimensional coordinate vector constraints and the obtained longitudinal-axis three-dimensional coordinate vector constraints, wherein the course angle unit vector constraint of the obstacle vehicle consists of the initial obstacle vehicle course angle constraint, the obtained transverse-axis three-dimensional coordinate vector constraints and the obtained longitudinal-axis three-dimensional coordinate vector constraints;
generating a unit vector of the heading angle of the obstacle vehicle according to the unit vector constraint of the heading angle of the obstacle vehicle;
generating a course angle of the obstacle vehicle according to the unit vector of the course angle of the obstacle vehicle;
generating a rotation matrix of the obstacle vehicle under the vehicle coordinate system according to the heading angle of the obstacle vehicle;
generating center coordinates of the obstacle vehicle in the vehicle coordinate system according to the rotation matrix, wherein the generating center coordinates of the obstacle vehicle in the vehicle coordinate system according to the rotation matrix comprises:
inputting the foreground image into a pre-trained vehicle type detection model to obtain vehicle type information of the obstacle vehicle, wherein the vehicle type information comprises a vehicle type;
selecting preset vehicle type information, which includes a preset vehicle type identical to the vehicle type, from a preset vehicle type information set as target preset vehicle type information, wherein the preset vehicle type information in the preset vehicle type information set includes the preset vehicle type and preset vehicle information corresponding to the preset vehicle type, and the preset vehicle information includes a preset wheel base, a preset vehicle length, a preset vehicle width and a preset vehicle height;
determining a preset wheel base included in the target preset vehicle type information as a vehicle wheel base of the obstacle vehicle;
determining a preset vehicle length included in the target preset vehicle type information as a vehicle length of the obstacle vehicle;
determining a preset vehicle width included in the target preset vehicle type information as a vehicle width of the obstacle vehicle;
determining a preset vehicle height included in the target preset vehicle type information as a vehicle height of the obstacle vehicle;
selecting a wheel ground point from the respective wheel ground points as a target wheel ground point, wherein the target wheel ground point is any wheel ground point of the obstacle vehicle;
determining coordinates of the target wheel grounding point in a coordinate system of the obstacle vehicle as obstacle vehicle coordinates according to the vehicle wheel base, the vehicle length, the vehicle width, the vehicle height, and the three-dimensional coordinates of the target wheel grounding point;
determining a bottom edge center distance of the obstacle vehicle according to the three-dimensional coordinates of the target wheel grounding point, the coordinates of the obstacle vehicle and the rotation matrix;
determining the center coordinate of the obstacle vehicle under the vehicle coordinate system according to the vehicle height and the bottom edge center distance;
generating a bounding box of the obstacle vehicle according to the center coordinate and the rotation matrix;
generating a driving route of the target vehicle based on the bounding box.
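The heading-angle unit-vector constraints recited in claim 1 can be sketched as a small least-squares problem over the four wheel grounding points. The quadratic cost form, the wheel indexing (front-left, rear-left, front-right, rear-right), and the prior weight `w_prior` below are illustrative assumptions; the claim itself only names the three constraint families (initial heading, transverse-axis pairs, longitudinal-axis pairs).

```python
import numpy as np

def heading_unit_vector(ground_pts, initial_heading, w_prior=1.0):
    """Estimate the obstacle vehicle's planar heading unit vector d.

    ground_pts      -- four wheel grounding points (3-D, ego-vehicle frame),
                       assumed ordered front-left, rear-left, front-right, rear-right
    initial_heading -- heading angle (rad) from the target detection model
    w_prior         -- assumed weight tying d to the initial heading

    Longitudinal-axis-adjacent pairs should be parallel to d (cross term = 0);
    transverse-axis-adjacent pairs should be perpendicular to d (dot term = 0).
    """
    d0 = np.array([np.cos(initial_heading), np.sin(initial_heading)])
    rows, rhs = [], []
    for i, j in [(0, 1), (2, 3)]:            # longitudinal-axis-adjacent pairs
        v = ground_pts[j][:2] - ground_pts[i][:2]
        rows.append([v[1], -v[0]]); rhs.append(0.0)   # v x d = 0
    for i, j in [(0, 2), (1, 3)]:            # transverse-axis-adjacent pairs
        v = ground_pts[j][:2] - ground_pts[i][:2]
        rows.append([v[0], v[1]]); rhs.append(0.0)    # v . d = 0
    rows += [[w_prior, 0.0], [0.0, w_prior]]          # initial-heading prior
    rhs += [w_prior * d0[0], w_prior * d0[1]]
    d, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return d / np.linalg.norm(d)
```

The heading angle of the obstacle vehicle then follows from the unit vector as `atan2(d[1], d[0])`.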
2. The method of claim 1, wherein said generating a rotation matrix of the obstacle vehicle in the vehicle coordinate system based on the obstacle vehicle heading angle comprises:
determining the rotation direction of the obstacle vehicle to be a clockwise direction in response to determining that the heading angle of the obstacle vehicle is greater than a first preset value;
determining a rotation direction of the obstacle vehicle as a counterclockwise direction in response to determining that the obstacle vehicle heading angle is less than the first preset value;
determining whether the front camera is horizontally installed;
in response to determining that the front camera is horizontally installed, determining a second preset value as the obstacle vehicle pitch angle and the obstacle vehicle roll angle, respectively;
generating the rotation matrix according to the obstacle vehicle pitch angle, the obstacle vehicle roll angle and the obstacle vehicle heading angle.
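Claim 2 composes the rotation matrix from the obstacle vehicle's pitch, roll and heading (yaw) angles, with pitch and roll fixed to a preset value when the front camera is mounted level. A minimal sketch using the common Z-Y-X Euler convention, assuming that preset value is zero:

```python
import numpy as np

def rotation_matrix(heading, pitch=0.0, roll=0.0):
    """Z-Y-X (yaw-pitch-roll) rotation matrix in the ego-vehicle frame.
    With a horizontally installed camera, pitch and roll default to the
    assumed preset value of 0, leaving a pure yaw rotation."""
    cy, sy = np.cos(heading), np.sin(heading)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])  # yaw
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])  # pitch
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])  # roll
    return Rz @ Ry @ Rx
```

With pitch and roll at zero, a positive heading angle rotates counterclockwise about the vertical axis, matching the sign convention implied by claim 2's clockwise/counterclockwise distinction.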
3. The method of claim 1, wherein the generating a travel route for the target vehicle based on the bounding box comprises:
determining the shortest distance between the obstacle vehicle and the target vehicle according to the bounding box;
in response to determining that the shortest distance is greater than or equal to a preset shortest distance, determining an original driving route of the target vehicle as a driving route of the target vehicle;
generating an optimal travel route for the target vehicle to avoid the obstacle vehicle as the travel route of the target vehicle in response to determining that the shortest distance is less than the preset shortest distance.
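The decision rule in claim 3 can be sketched as a simple threshold check. The planar corner-to-ego distance and the 5 m default are assumptions for illustration; the claim fixes neither the distance metric nor the value of the preset shortest distance.

```python
import numpy as np

def choose_route(box_corners, original_route, avoidance_route, preset_shortest=5.0):
    """Keep the original route when the obstacle's bounding box stays at least
    `preset_shortest` metres from the target vehicle (ego origin); otherwise
    switch to a route that avoids the obstacle vehicle."""
    shortest = min(float(np.linalg.norm(c[:2])) for c in box_corners)
    return original_route if shortest >= preset_shortest else avoidance_route
```

A production system would measure the distance from the bounding box to the ego vehicle's own footprint rather than to its origin; the corner-based distance above is the simplest stand-in.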
4. The method of claim 1, wherein the generating a travel route for the target vehicle based on the bounding box comprises:
and generating an optimal running route of the target vehicle avoiding the obstacle vehicle as the running route of the target vehicle according to the position information of the bounding box.
5. The method according to one of claims 1-4, wherein the method further comprises:
and controlling the target vehicle to run according to the running route.
6. A travel route generation device comprising:
a first determining unit configured to determine respective two-dimensional coordinates of respective wheel grounding points of an obstacle vehicle based on a foreground image acquired by a front camera of a target vehicle, wherein the foreground image is a photographed image of the obstacle vehicle, the wheel grounding points are tangent points of the four wheels of the obstacle vehicle with the ground, and the two-dimensional coordinates are the two-dimensional coordinates of the wheel grounding points in the foreground image, and wherein the determining of the respective two-dimensional coordinates of the respective wheel grounding points of the obstacle vehicle based on the foreground image acquired by the front camera of the target vehicle includes:
inputting the foreground image into a pre-trained wheel grounding point detection model to obtain each two-dimensional coordinate of each wheel grounding point, wherein the wheel grounding point detection model is a neural network model which takes a vehicle image as input data and takes the two-dimensional coordinate of the identified wheel grounding point as output data;
a second determination unit configured to determine, from the respective two-dimensional coordinates, respective three-dimensional coordinates of the respective wheel grounding points in a vehicle coordinate system of the target vehicle;
a third determination unit configured to determine a heading angle of the obstacle vehicle detected by the target detection model as an initial obstacle vehicle heading angle;
a first generating unit configured to generate a unit vector of a heading angle of the obstacle vehicle according to the initial heading angle of the obstacle vehicle and the respective three-dimensional coordinates, wherein the generating of the unit vector of the heading angle of the obstacle vehicle according to the initial heading angle of the obstacle vehicle and the respective three-dimensional coordinates comprises:
inputting the initial obstacle vehicle course angle and a preset obstacle vehicle course angle unit vector to a preset course angle constraint function to obtain initial obstacle vehicle course angle constraint, wherein the preset obstacle vehicle course angle unit vector is the course angle unit vector of the obstacle vehicle, and the preset course angle constraint function is the constraint function for constraining the preset obstacle vehicle course angle unit vector and the initial obstacle vehicle course angle;
for every two of the respective three-dimensional coordinates that are adjacent along the transverse axis, performing the following steps:
forming a three-dimensional coordinate vector by the two three-dimensional coordinates;
inputting the formed three-dimensional coordinate vector and the preset obstacle vehicle course angle unit vector into a preset transverse-axis three-dimensional coordinate vector constraint function to obtain a transverse-axis three-dimensional coordinate vector constraint, wherein the preset transverse-axis three-dimensional coordinate vector constraint function is a constraint function on the preset obstacle vehicle course angle unit vector;
for every two of the respective three-dimensional coordinates that are adjacent along the longitudinal axis, performing the following steps:
forming a three-dimensional coordinate vector by the two three-dimensional coordinates;
inputting the formed three-dimensional coordinate vector and the preset obstacle vehicle course angle unit vector into a preset longitudinal-axis three-dimensional coordinate vector constraint function to obtain a longitudinal-axis three-dimensional coordinate vector constraint, wherein the preset longitudinal-axis three-dimensional coordinate vector constraint function is a constraint function on the preset obstacle vehicle course angle unit vector;
establishing a course angle unit vector constraint of the obstacle vehicle according to the initial obstacle vehicle course angle constraint, the obtained transverse-axis three-dimensional coordinate vector constraints and the obtained longitudinal-axis three-dimensional coordinate vector constraints, wherein the course angle unit vector constraint of the obstacle vehicle consists of the initial obstacle vehicle course angle constraint, the obtained transverse-axis three-dimensional coordinate vector constraints and the obtained longitudinal-axis three-dimensional coordinate vector constraints;
generating a unit vector of the heading angle of the obstacle vehicle according to the unit vector constraint of the heading angle of the obstacle vehicle;
a second generating unit configured to generate a heading angle of the obstacle vehicle according to the unit vector of the heading angle of the obstacle vehicle;
a third generating unit configured to generate a rotation matrix of the obstacle vehicle in the vehicle coordinate system according to the obstacle vehicle heading angle;
a fourth generating unit configured to generate center coordinates of the obstacle vehicle in the vehicle coordinate system based on the rotation matrix, wherein the generating of the center coordinates of the obstacle vehicle in the vehicle coordinate system based on the rotation matrix includes:
inputting the foreground image into a pre-trained vehicle type detection model to obtain vehicle type information of the obstacle vehicle, wherein the vehicle type information comprises a vehicle type;
selecting preset vehicle type information, which includes a preset vehicle type identical to the vehicle type, from a preset vehicle type information set as target preset vehicle type information, wherein the preset vehicle type information in the preset vehicle type information set includes the preset vehicle type and preset vehicle information corresponding to the preset vehicle type, and the preset vehicle information includes a preset wheel base, a preset vehicle length, a preset vehicle width and a preset vehicle height;
determining a preset wheel base included in the target preset vehicle type information as a vehicle wheel base of the obstacle vehicle;
determining a preset vehicle length included in the target preset vehicle type information as a vehicle length of the obstacle vehicle;
determining a preset vehicle width included in the target preset vehicle type information as a vehicle width of the obstacle vehicle;
determining a preset vehicle height included in the target preset vehicle type information as a vehicle height of the obstacle vehicle;
selecting a wheel grounding point from the individual wheel grounding points as a target wheel grounding point, wherein the target wheel grounding point is an arbitrary wheel grounding point of the obstacle vehicle;
determining coordinates of the target wheel grounding point in a coordinate system of the obstacle vehicle as obstacle vehicle coordinates according to the vehicle wheel base, the vehicle length, the vehicle width, the vehicle height, and the three-dimensional coordinates of the target wheel grounding point;
determining a bottom edge center distance of the obstacle vehicle according to the three-dimensional coordinates of the target wheel grounding point, the coordinates of the obstacle vehicle and the rotation matrix;
determining the center coordinate of the obstacle vehicle under the vehicle coordinate system according to the vehicle height and the bottom edge center distance;
a fifth generation unit configured to generate a bounding box of the obstacle vehicle from the center coordinates and the rotation matrix;
a sixth generation unit configured to generate a travel route of the target vehicle based on the bounding box.
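The bounding box produced by the fifth generation unit can be sketched as the eight corners of an oriented box built from the center coordinate, the rotation matrix, and the preset vehicle dimensions. The corner ordering and the axis conventions (x forward, y left, z up) are assumptions not fixed by the claims.

```python
import numpy as np

def bounding_box(center, rotation, length, width, height):
    """Eight corners of the obstacle vehicle's oriented 3-D bounding box,
    expressed in the target vehicle's coordinate system.

    center   -- (3,) obstacle vehicle center in the ego-vehicle frame
    rotation -- (3, 3) rotation matrix of the obstacle vehicle
    """
    half = np.array([length, width, height]) / 2.0
    # Corner sign pattern in the obstacle vehicle's own frame.
    signs = np.array([[sx, sy, sz]
                      for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    # Rotate the local corner offsets into the ego frame, then translate.
    return (signs * half) @ rotation.T + np.asarray(center)
```

The sixth generation unit would then feed these corners to the route generator, e.g. the distance-threshold rule of claim 3.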
7. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
8. A computer-readable medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211422241.6A CN115588185B (en) | 2022-11-15 | 2022-11-15 | Driving route generation method and device, electronic equipment and computer readable medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115588185A CN115588185A (en) | 2023-01-10 |
CN115588185B true CN115588185B (en) | 2023-03-14 |
Family
ID=84782990
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211422241.6A Active CN115588185B (en) | 2022-11-15 | 2022-11-15 | Driving route generation method and device, electronic equipment and computer readable medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115588185B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110962844A (en) * | 2019-10-28 | 2020-04-07 | 纵目科技(上海)股份有限公司 | Vehicle course angle correction method and system, storage medium and terminal |
CN114973198A (en) * | 2022-05-27 | 2022-08-30 | 智道网联科技(北京)有限公司 | Course angle prediction method and device of target vehicle, electronic equipment and storage medium |
CN115257727A (en) * | 2022-09-27 | 2022-11-01 | 禾多科技(北京)有限公司 | Obstacle information fusion method and device, electronic equipment and computer readable medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107702716B (en) * | 2017-08-31 | 2021-04-13 | 广州小鹏汽车科技有限公司 | Unmanned driving path planning method, system and device |
US11181921B2 (en) * | 2018-09-14 | 2021-11-23 | Huawei Technologies Co., Ltd. | System and method for hierarchical planning in autonomous vehicles |
CN115185271B (en) * | 2022-06-29 | 2023-05-23 | 禾多科技(北京)有限公司 | Navigation path generation method, device, electronic equipment and computer readable medium |
- 2022-11-15: CN application CN202211422241.6A filed, patent CN115588185B, status active
Also Published As
Publication number | Publication date |
---|---|
CN115588185A (en) | 2023-01-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114419604B (en) | Obstacle information generation method and device, electronic equipment and computer readable medium | |
CN112348029B (en) | Local map adjusting method, device, equipment and computer readable medium | |
CN114663529B (en) | External parameter determining method and device, electronic equipment and storage medium | |
CN115540894B (en) | Vehicle trajectory planning method and device, electronic equipment and computer readable medium | |
CN114993328B (en) | Vehicle positioning evaluation method, device, equipment and computer readable medium | |
CN112561990B (en) | Positioning information generation method, device, equipment and computer readable medium | |
CN114399588A (en) | Three-dimensional lane line generation method and device, electronic device and computer readable medium | |
CN115817463B (en) | Vehicle obstacle avoidance method, device, electronic equipment and computer readable medium | |
CN115617051A (en) | Vehicle control method, device, equipment and computer readable medium | |
CN110717467A (en) | Head pose estimation method, device, equipment and storage medium | |
CN114445597A (en) | Three-dimensional lane line generation method and device, electronic device and computer readable medium | |
CN115588185B (en) | Driving route generation method and device, electronic equipment and computer readable medium | |
CN112590929B (en) | Correction method, apparatus, electronic device, and medium for steering wheel of autonomous vehicle | |
CN111338339B (en) | Track planning method, track planning device, electronic equipment and computer readable medium | |
CN113673446A (en) | Image recognition method and device, electronic equipment and computer readable medium | |
CN113778078A (en) | Positioning information generation method and device, electronic equipment and computer readable medium | |
JP7425169B2 (en) | Image processing method, device, electronic device, storage medium and computer program | |
CN115610415B (en) | Vehicle distance control method, device, electronic equipment and computer readable medium | |
CN114723640B (en) | Obstacle information generation method and device, electronic equipment and computer readable medium | |
CN113804196B (en) | Unmanned vehicle path planning method and related equipment | |
CN113805578B (en) | Unmanned vehicle path optimization method and related equipment | |
CN116414120A (en) | Path tracking method, path tracking device, electronic equipment and storage medium | |
CN113804208B (en) | Unmanned vehicle path optimization method and related equipment | |
CN114419299A (en) | Virtual object generation method, device, equipment and storage medium | |
CN114494428B (en) | Vehicle pose correction method and device, electronic equipment and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||