
CN113869607A - Safe driving prediction optimization method, device, equipment and storage medium - Google Patents

Safe driving prediction optimization method, device, equipment and storage medium

Info

Publication number
CN113869607A
CN113869607A
Authority
CN
China
Prior art keywords
ttc
optimization model
model
vehicle
optical flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111265707.1A
Other languages
Chinese (zh)
Inventor
戚悦
胡美玉
苏向阳
朱俊辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uisee Technologies Beijing Co Ltd
Original Assignee
Uisee Technologies Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uisee Technologies Beijing Co Ltd filed Critical Uisee Technologies Beijing Co Ltd
Priority to CN202111265707.1A priority Critical patent/CN113869607A/en
Publication of CN113869607A publication Critical patent/CN113869607A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a safe driving prediction optimization method, apparatus, device and storage medium. The method comprises: acquiring a current frame and a candidate frame, obtaining a TTC model through a pinhole imaging model according to the current frame and the candidate frame, and obtaining a first TTC optimization model according to the TTC model and preset conditions, wherein the first TTC optimization model is used for predicting the safe driving of a vehicle. According to the technical scheme, safety prediction is carried out based on the collected video frames, a longer distance can be sensed, and the safety state of the intersection can be predicted in advance, so that the driving of the vehicle can be planned in advance. In addition, the driving condition of the vehicle is predicted through the pinhole imaging model and the first TTC optimization model determined by the preset conditions, which makes the prediction process closer to the actual condition of the vehicle and improves the prediction accuracy.

Description

Safe driving prediction optimization method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of unmanned driving technologies, and in particular, to a safe driving prediction optimization method, apparatus, device, and storage medium.
Background
In the field of unmanned driving, in order to ensure driving safety, an unmanned vehicle needs to sense its surrounding environment. When the vehicle needs to turn while passing through an intersection and its right of way is weak (for example, during an unprotected left turn the vehicle should yield to oncoming traffic), safe driving requires a relatively long sensing distance in a specific direction as a guarantee.
In the prior art, the intersection environment is generally sensed by a laser radar (lidar), but the sensing radius of the lidar is limited by its beams and the sensing distance is short (within 30 meters). There is therefore a safety risk that a turning vehicle (especially a vehicle towing several pieces of cargo) cannot perceive another vehicle that is approaching it quickly.
Disclosure of Invention
To solve the above technical problem or at least partially solve the above technical problem, the present disclosure provides a safe driving prediction optimization method, apparatus, device, and storage medium.
In a first aspect, an embodiment of the present disclosure provides a safe driving prediction optimization method, including:
acquiring a current frame and a candidate frame, and obtaining a TTC model through a pinhole imaging model according to the current frame and the candidate frame;
obtaining a first TTC optimization model according to the TTC model and preset conditions;
the first TTC optimization model is used for predicting safe running of the vehicle.
In a second aspect, an embodiment of the present disclosure provides a safe driving prediction optimization apparatus, including:
the TTC model determining module is used for acquiring a current frame and a candidate frame and obtaining a TTC model through a pinhole imaging model according to the current frame and the candidate frame;
the first optimized TTC determining module is used for obtaining a first TTC optimized model according to the TTC model and preset conditions;
the first TTC optimization model is used for predicting safe running of the vehicle.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method as defined in any one of the above first aspects.
In a fourth aspect, the disclosed embodiments provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method according to any of the first aspects described above.
Compared with the prior art, the technical scheme provided by the embodiments of the present disclosure has the following advantages. The embodiments of the present disclosure provide a safe driving prediction optimization method, apparatus, device and storage medium, the method comprising: acquiring a current frame and a candidate frame, obtaining a TTC model through a pinhole imaging model according to the current frame and the candidate frame, and obtaining a first TTC optimization model according to the TTC model and preset conditions, wherein the first TTC optimization model is used for predicting the safe driving of a vehicle. According to the technical scheme, the time to collision is determined based on the collected video frames, a farther distance can be sensed, the safety state of the intersection is given, and the driving of the vehicle is planned in advance.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure or in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below; it is obvious that those skilled in the art can also obtain other drawings from these drawings without inventive effort.
FIG. 1 is a flow chart of a safe driving prediction optimization method provided by an embodiment of the present disclosure;
FIG. 2 is a system block diagram of a safe driving prediction optimization method provided by an embodiment of the present disclosure;
FIG. 3a is a diagram of a data storage structure of a detection result provided by an embodiment of the present disclosure;
FIG. 3b is a view of the bbox data storage structure provided by an embodiment of the present disclosure;
fig. 4 is a schematic diagram of an object in an image corresponding to a tracker provided in an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a data storage structure of tracker information provided in an embodiment of the present disclosure;
fig. 6 is a flowchart of a safe driving prediction method provided by an embodiment of the present disclosure;
fig. 7 is a flow chart of selecting candidate frames according to the present disclosure;
fig. 8 is a flowchart for solving the second TTC optimization model according to the embodiment of the present disclosure;
fig. 9a is a schematic diagram of a ranging result provided by an embodiment of the present disclosure;
fig. 9b is a schematic diagram of a ranging result provided by an embodiment of the present disclosure;
FIG. 10a is a schematic diagram of a velocity fit provided by an embodiment of the present disclosure;
FIG. 10b is a schematic diagram of a velocity fit provided by an embodiment of the present disclosure;
fig. 11 is a schematic diagram comparing TTCs provided by the embodiments of the present disclosure;
fig. 12 is a block diagram of a safe driving prediction optimizing apparatus provided in an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
Fig. 1 is a flowchart of a safe driving prediction optimization method provided in an embodiment of the present disclosure, where the method may be applied to a case where a driving condition of an unmanned vehicle is predicted in advance, and the method may be executed by a safe driving prediction optimization device, which may be implemented in software and/or hardware, and may be configured in a visual perception module in the unmanned vehicle.
The safe driving prediction optimization method provided by the embodiment of the present disclosure is mainly applied to the field of unmanned driving. As shown in fig. 2, a system structure carrying the safe driving prediction optimization method may include: a regulation and control module, an image acquisition module and a visual perception module. The image acquisition module may include a camera, and the camera may be any one of a head-up camera, a middle-and-long-focus camera, a fisheye camera, and the like.
When the unmanned vehicle needs to check whether a certain direction is safe, the regulation and control module sends an opening request carrying pose information to the image acquisition module. The image acquisition module is opened based on the opening request, its parameters are configured according to the pose information, and it is adjusted to a suitable angle and starts to record video. Meanwhile, the regulation and control module sends a starting request to the visual perception module so as to start the visual perception module. The image acquisition module processes the recorded video to obtain an image sequence and transmits the image sequence to the visual perception module. The visual perception module executes the safe driving prediction optimization method provided by this embodiment based on the received image sequence: key objects are detected in the video through a deep learning model, the visual tracking unit tracks the moving vehicles, the collision estimation unit calculates the time to collision (TTC) of the tracked vehicles, and finally the road safety state is determined according to the time to collision and returned to the regulation and control module, which controls the driving of the unmanned vehicle according to the safety state.
In this embodiment, detecting the key objects in the image sequence through the deep learning model mainly includes: for a received image sequence, the deep learning model aligns the images and then performs online inference to obtain the object type (obj) and the bounding box (bbox) of each object; since a plurality of bboxes may be produced for one object, repeated bboxes are removed using a Non-Maximum Suppression (NMS) algorithm, and a detection result is finally output, where the detection result may include the type, bbox and confidence of each object in the image sequence.
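As an illustration of the duplicate-bbox removal step, the following is a minimal sketch of an IoU-based non-maximum suppression pass; the function names and the 0.5 IoU threshold are illustrative assumptions rather than values specified by this disclosure.

```python
def iou(a, b):
    # a, b: (x1, y1, x2, y2) with x1 < x2 and y1 < y2 (two diagonal vertices)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(bboxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring bbox of each group of overlapping duplicates."""
    order = sorted(range(len(bboxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(bboxes[i], bboxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```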
The detection result is stored in a memory block pointed to by a float pointer. The storage structure is shown in fig. 3a: the first 9 memory units store the total number of bboxes, the total number of bboxes of the first class, the total number of bboxes of the first two classes, ..., and the total number of bboxes of the first eight classes, and the bbox data are stored in turn in the subsequent memory units. The bbox data comprise the coordinate information and the confidence of the bbox.
The data storage structure shown in fig. 3b can store one 3D bbox; the bbox in this embodiment is a 2D bbox, where 2D refers to the image plane. The position of a bbox in the plane can be represented accurately by 4 values, so the first 4 storage units are used to store the coordinate information of the bbox, storage units 5-16 are set to 0, and the last storage unit stores the confidence. Specifically, in the bbox data storage structure shown in fig. 3b, the first storage unit stores the abscissa of the first vertex of the bbox, the second stores the ordinate of the first vertex, the third stores the abscissa of the second vertex, and the fourth stores the ordinate of the second vertex, where the first vertex and the second vertex are diagonal vertices; the middle storage units 5-16 are 0, and the last storage unit stores the confidence.
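For illustration, a detection buffer laid out as described above (9 leading counters followed by 17-float bbox records: 4 coordinates, 12 zero-padded units and 1 confidence) could be read as follows; the record length of 17 and the helper name are assumptions made for this sketch, not values stated in the disclosure.

```python
def parse_detection_buffer(buf):
    """buf: flat sequence of floats in the layout described above."""
    RECORD_LEN = 17              # 4 coordinates + 12 padding units + 1 confidence
    total = int(buf[0])          # total number of bboxes
    # buf[1:9] hold the cumulative per-class counts (first class, first two classes, ...)
    records, offset = [], 9
    for _ in range(total):
        rec = buf[offset:offset + RECORD_LEN]
        x1, y1, x2, y2 = rec[0:4]                     # two diagonal vertices of the 2D bbox
        records.append(((x1, y1, x2, y2), rec[-1]))   # (coordinates, confidence)
        offset += RECORD_LEN
    return records
```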
Furthermore, the detection result output by the deep learning model is input into the visual tracking unit, which judges whether the object needs to be tracked; if so, the detection result is used to track the moving object. Specifically, tracking is performed using the Intersection-over-Union (IoU) of bboxes, and a tracker corresponding to each object is obtained.
Specifically, the visual tracking unit maintains a tracker for each detected object, for example tracker1, tracker2, and so on. When a new detection result is received, for each tracker the Intersection-over-Union (IoU) of bboxes is used to calculate the overlap (region of interest, ROI) between the tracker and each bbox, and the bbox with the largest ROI value is selected as the tracking result, i.e., the new state of the tracker. The region of interest may be understood as the overlapping portion of the tracker and the bbox. As shown in fig. 4, the same object keeps the same id, for example car1, car2, car3, car4, car5. One piece of tracker information may include the object (obj) information of the same object at different times; the tracker information is stored in one memory block, and the obj information includes the coordinate information, distance, speed, etc. of each bbox.
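A minimal sketch of this greedy matching step is given below, reusing the iou() helper from the NMS sketch above; the rule that the detection with the largest overlap wins follows the description, while the function name and the behaviour when nothing overlaps are illustrative assumptions.

```python
def match_tracker(tracker_bbox, detections):
    """Pick the detection whose overlap (ROI) with the tracker's last bbox is largest.
    detections: list of ((x1, y1, x2, y2), confidence) tuples."""
    best_idx, best_overlap = None, 0.0
    for idx, (bbox, _conf) in enumerate(detections):
        overlap = iou(tracker_bbox, bbox)
        if overlap > best_overlap:
            best_overlap, best_idx = overlap, idx
    return best_idx          # None if no detection overlaps the tracker
```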
Further, the image corresponding to each bbox, the corresponding time and the occluded area are retained in the tracker, where the occluded areas are marked with roi_set. For example, the foreground/background relation of two overlapping bboxes is judged: the bbox whose bottom edge has the larger ordinate is the foreground, and the other is the background. As shown in fig. 4, car3 is occluded by car2, and the occluded area is retained in the tracker of car3; car2 is not occluded by other objects, so the roi_set of car2 is empty.
Fig. 5 is a schematic diagram of a data storage structure of tracker information provided in an embodiment of the present disclosure. As shown in fig. 5, the obj information from time t0 to time tn is stored in order from the start position (Buffer-begin) to the end position (Buffer-end) of one memory block. It should be noted that the data structure shown in fig. 5 only contains the obj information of one object at different times; the tracker information of different objects is stored in different memory blocks.
Further, the total length of the data storage structure of the obj information at a certain time (one rectangular box in fig. 5) is 45, with data type float. The first 16 spaces store the original bbox data structure, the next 6 spaces store the original lateral distance, longitudinal distance, lateral velocity, longitudinal velocity, lateral acceleration and longitudinal acceleration, the next 22 spaces store the smoothed bbox and the smoothed distance, velocity and acceleration, and the last position stores the detection confidence (score).
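For illustration, the 45-float obj record described above could be parsed as follows; the class and field names are hypothetical and only mirror the layout given in the text.

```python
from dataclasses import dataclass

@dataclass
class ObjInfo:
    bbox_raw: list       # units 0..15: original bbox data structure
    lat_dist: float      # unit 16: original lateral distance
    lon_dist: float      # unit 17: original longitudinal distance
    lat_vel: float       # unit 18: original lateral velocity
    lon_vel: float       # unit 19: original longitudinal velocity
    lat_acc: float       # unit 20: original lateral acceleration
    lon_acc: float       # unit 21: original longitudinal acceleration
    smoothed: list       # units 22..43: smoothed bbox, distance, velocity, acceleration
    score: float         # unit 44: detection confidence

def parse_obj_info(units):
    assert len(units) == 45
    return ObjInfo(list(units[0:16]), *units[16:22], list(units[22:44]), units[44])
```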
As shown in fig. 1, the safe driving prediction optimization method provided by the embodiment of the present disclosure mainly includes the following steps:
and S11, acquiring the current frame and the candidate frame, and obtaining the TTC model through the pinhole imaging model according to the current frame and the candidate frame.
In this embodiment, the current frame may be understood as the image frame whose timestamp is closest to the current time, or as the image frame with the latest timestamp in the whole image sequence. For example, the timestamps of the image sequence are t0, t1, ..., tn-1, tn; the image whose timestamp is tn is the current frame. Further, the candidate frame may be understood as an image whose timestamp precedes the current frame timestamp and whose optical flow vector is greater than the optical flow threshold.
A timestamp can be understood as a record of the image generation time; the timestamp in this embodiment refers to the image frame generation time.
In one embodiment, obtaining the current frame and the candidate frame comprises: determining the image frame with the timestamp closest to the current time as the current frame; determining all image frames with the time stamp difference between the time stamp and the current frame time stamp smaller than a preset time length as a candidate frame candidate set; candidate frames are selected from the candidate frame candidate set.
Determining the image frame whose timestamp is closest to the current time as the current frame may be understood as reading timestamps of all image sequences, determining the image frame closest to the current time as the current frame, and may also be understood as determining an image having a latest time of generation among all image sequences as the current frame.
Further, difference calculation is carried out on the time stamps of all the images and the time stamp of the current frame, the difference is compared with preset time length, and all the images with the difference smaller than the preset time length are determined as candidate frame candidate sets. Candidate frames are selected from the candidate frame candidate set. The preset time length can be selected according to actual conditions, and optionally, the preset time length is 5 seconds. For example: all images within 5 seconds before the current frame timestamp are determined as a candidate frame candidate set.
In particular, the timestamps of the image sequence are t0, t1, ..., tn-1, tn. The image with timestamp tn is the current frame, and all images corresponding to the timestamps t0, t1, ..., tn-1 form the candidate frame candidate set.
In the embodiment, the candidate frame candidate set is determined based on the preset duration as a standard, and then the candidate frame is selected from the candidate frame candidate set, so that the candidate frame selection range is narrowed, and the calculation time for selecting the candidate frame is shortened.
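A minimal sketch of this step, assuming frames are given as (timestamp, image) pairs and using the optional 5-second preset duration mentioned above:

```python
def split_current_and_candidates(frames, preset_duration=5.0):
    """frames: iterable of (timestamp, image) pairs, not necessarily sorted.
    Returns the current frame and the candidate-frame candidate set."""
    frames = sorted(frames, key=lambda f: f[0])
    current = frames[-1]                         # latest timestamp = current frame
    t_cur = current[0]
    candidates = [f for f in frames[:-1] if t_cur - f[0] < preset_duration]
    return current, candidates
```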
In one embodiment, selecting a candidate frame from the candidate frame candidate set comprises: sequentially selecting image frames from the candidate frame candidate set as candidate frames to be selected according to the sequence of the timestamps from back to front; extracting a first feature point of the candidate frame to be selected; determining a second feature point matched with the first feature point in the current frame; calculating an optical flow vector between the first feature point and the second feature point; and when the optical flow vector is larger than the optical flow threshold value, determining the candidate frame to be selected as a candidate frame.
In the present embodiment, a method for selecting a candidate frame is provided, and fig. 7 is a flowchart for selecting a candidate frame according to the present disclosure. As shown in fig. 7, the candidate frame selection method provided by the embodiment of the present disclosure mainly includes the following steps:
and S21, sequentially selecting image frames from the candidate frame candidate set as candidate frames to be selected according to the sequence of the time stamps from back to front.
In this embodiment, the images in the candidate frame candidate set are used in turn as candidate frames to be selected, in the order of timestamps from back to front; once a candidate frame to be selected is determined as the candidate frame, the selection stops. If no candidate frame can be found in the candidate frame candidate set, the road is regarded as being in a safe state, the TTC is not calculated, and the information that the road is in the safe state is directly sent to the regulation and control module.
Specifically, candidate frames need to be selected from the image sequence corresponding to the timestamps t0, t1, ..., tn-1. The selection proceeds from back to front, i.e., from tn-1 to t0. Let the timestamp of a candidate frame to be selected be tk; the optical flow vector between the images corresponding to tn and tk is used as the index for deciding whether this frame is the candidate frame. If no candidate frame can be found in the image sequence corresponding to the timestamps t0, t1, ..., tn-1, the road is regarded as being in a safe state.
S22, extracting a first feature point of the candidate frame to be selected; and determining a second feature point matched with the first feature point in the current frame.
A feature point is a pixel point obtained after feature extraction and screening. The first feature point may be a feature point extracted by any one of the following detection methods: GFTT (Good Features To Track), FAST (Features from Accelerated Segment Test), SURF (Speeded-Up Robust Features), SIFT (Scale-Invariant Feature Transform), STAR. Optionally, the first feature point is a GFTT feature point, that is, a feature point extracted by using the GFTT detection method.
The first characteristic point and the second characteristic point are pixel points of the same point in the candidate frame to be selected and the current frame respectively. For example: the first feature point is a pixel point at the upper left corner of the head of the vehicle in the candidate frame to be selected, and then the second feature point is a pixel point at the upper left corner of the head of the vehicle in the current frame.
Further, if occlusion exists between bboxes in the candidate frame to be selected and the current frame, the feature points of the overlapping area are allocated accordingly, and forward-backward optical flow tracking and an optical-flow RANSAC algorithm are adopted to process the feature points in the candidate frame to be selected, so as to screen out wrong matching points and outlier feature points and thereby determine the second feature points matched with the first feature points in the current frame.
And S23, calculating an optical flow vector between the first feature point and the second feature point.
And S24, when the optical flow vector is larger than the optical flow threshold value, determining the candidate frame to be selected as a candidate frame.
Further, calculating the value of an optical flow vector between the first feature point and the second feature point, and if the value of the optical flow vector exceeds an optical flow threshold, calling an algorithm to calculate a specific TTC value; if not, the motion of the two frames of images is small, the candidate frame to be selected is not suitable to be used as the candidate frame, and whether the previous frame is the candidate frame or not is continuously judged until all the candidate frames to be selected are judged to be finished.
The optical flow threshold may be set according to actual conditions; optionally, the optical flow threshold is set to an optical flow vector magnitude of 5 pixels or more.
Specifically, if the value of the optical flow vector exceeds 5 pixels, an algorithm may be invoked to calculate a specific TTC value, so as to determine the safety state of the vehicle. If not, the motion of the two frames of images is small, the candidate frame to be selected is not suitable to be used as the candidate frame, and whether the previous frame is the candidate frame or not is continuously judged until all the candidate frames to be selected are judged to be finished.
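The back-to-front candidate search can be sketched as follows; OpenCV's GFTT corner detector and pyramidal LK tracking are used here as stand-ins for the feature extraction and matching described above, the 5-pixel threshold follows the optional value given above, and taking the median flow magnitude as "the optical flow vector" of a frame pair is an illustrative simplification.

```python
import cv2
import numpy as np

def select_candidate_frame(current_img, candidates, flow_threshold=5.0):
    """current_img: grayscale current frame.
    candidates: grayscale candidate frames ordered from latest to earliest timestamp.
    Returns the first frame whose flow toward the current frame is large enough."""
    for cand in candidates:
        pts = cv2.goodFeaturesToTrack(cand, 200, 0.01, 7)     # GFTT feature points
        if pts is None:
            continue
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(cand, current_img, pts, None)
        ok = status.reshape(-1) == 1
        if not ok.any():
            continue
        flow = np.linalg.norm((nxt - pts).reshape(-1, 2)[ok], axis=1)
        if np.median(flow) > flow_threshold:                  # motion is large enough
            return cand                                       # candidate frame found
    return None          # no candidate frame -> road is treated as being in a safe state
```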
The specific steps of obtaining the TTC model through the pinhole imaging model according to the current frame and the candidate frame mainly comprise:

Assume that the actual physical width of the preceding vehicle is W. At timestamp tk (the candidate frame generation time) the preceding vehicle is at distance z from the camera and its imaging width in the image is w0; after a time Δt has elapsed at a relative speed v, at timestamp tn (the current frame generation time) it has moved to a distance z' from the camera and its imaging width in the image is w1. The focal length of the camera is f and the image width proportionality coefficient is S. According to the principle of the pinhole imaging model, the relation between the object distance and the width of the object in the image is obtained, as shown in formula (1):

w0 = f·S·W / z,    w1 = f·S·W / z'    (1)

Both relations are obtained from the principle of the pinhole imaging model and express that the imaging width in the image is proportional to the product of the camera focal length and the actual size of the vehicle, and inversely proportional to the actual distance from the vehicle to the camera.

Further, assuming that the ego vehicle and the preceding vehicle travel at a relatively constant speed v, after the time Δt the distance z' from the preceding vehicle to the camera at time tn and the distance z at timestamp tk satisfy the following formula (2):

z' = z + v×Δt    (2)

Combining formula (1) and formula (2) and performing an equivalent transformation gives formula (3):

f·S·W / w1 = f·S·W / w0 + v×Δt    (3)

Performing an equivalent transformation on formula (3) gives formula (4):

f·S·W = (w0·w1·v·Δt) / (w0 − w1)    (4)

Combining formulas (1), (2) and (4) and performing an equivalent transformation gives formula (5):

z' / v = (w0·Δt) / (w0 − w1)    (5)

Further, in an ideal case, the time to collision TTC can be understood as the ratio of the distance between the current vehicle and the preceding vehicle to the speed, that is, it can be expressed by formula (6):

TTC = z' / v    (6)

Combining formula (5) with formula (6), the TTC model can be obtained, expressed by formula (7):

TTC = (w0·Δt) / (w0 − w1)    (7)
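As a quick numeric reading of formula (7), the sketch below computes the TTC from the imaging widths in the candidate frame (w0) and the current frame (w1) and the elapsed time Δt; the function name is illustrative, and under the signed relative speed of formula (2) an approaching target (w1 > w0) yields a negative value whose magnitude is the time to collision.

```python
def ttc_from_widths(w0, w1, delta_t):
    """Width-based TTC following formula (7): TTC = w0 * delta_t / (w0 - w1).
    w0: imaging width in the candidate frame, w1: imaging width in the current frame,
    delta_t: time elapsed between the two frames, in seconds."""
    d = w0 - w1
    if abs(d) < 1e-6:
        return float("inf")        # no measurable scale change
    return w0 * delta_t / d

# Example: the imaging width grows from 40 px to 44 px over 0.1 s -> |TTC| = 1.0 s.
print(abs(ttc_from_widths(40.0, 44.0, 0.1)))
```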
S12, obtaining a first TTC optimization model according to the TTC model and preset conditions; the first TTC optimization model is used for predicting safe running of the vehicle.
In this embodiment, the first TTC optimization model may be understood as a TTC model obtained by optimizing the TTC model by using a preset condition.
In one embodiment, the preset conditions include: the target object is a rigid body, the target object does not rotate during the relative movement, and the depth change of the target object between the current frame and the candidate frame within the short time interval is ignored.
Specifically, denote any two points on the preceding vehicle as i and j. The two points are imaged at the two times tk and tn. The abscissas of point i in the two images are recorded as ui(tk) and ui(tn), and the abscissas of point j are recorded as uj(tk) and uj(tn). Let

Δui = ui(tn) − ui(tk),    Δuj = uj(tn) − uj(tk)    (8)

Further, after the two points are imaged at times tk and tn, the change of the distance between them can be expressed by formula (9), and formula (9) can be equivalently transformed:

Δwidth = (uj(tn) − ui(tn)) − (uj(tk) − ui(tk))    (9)

where uj(tn) − ui(tn) represents the imaging width of points i and j in the image at time tn, uj(tk) − ui(tk) represents the imaging width of points i and j in the image at time tk, and Δwidth represents the change between the two imaging widths.

Combining and transforming formula (8) and formula (9) gives

Δwidth = Δuj − Δui    (10)

In the present embodiment, Δwidth represents the change between the two imaging widths, and w1 − w0 also represents the change between the two imaging widths.

It should be noted that the line connecting point i and point j cannot be parallel to the horizontal direction of the image, nor can it be parallel to the vertical direction of the image.

Combining formula (7) and formula (10), the first TTC optimization model can be obtained, represented by formula (11):

TTC = ((uj(tk) − ui(tk))·Δt) / (Δui − Δuj)    (11)

The specific Δt can be calculated from the timestamps of the current frame and the candidate frame; the abscissas are then substituted into the formula and the specific value of the TTC is calculated, thereby realizing the prediction of the safe driving of the vehicle.
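Formula (11) can likewise be evaluated for a single pair of tracked points, as in the sketch below; only the abscissas of the pair in the candidate frame and the current frame are needed, the variable names are illustrative, and the result is consistent with the width-based sketch after formula (7).

```python
def ttc_from_point_pair(ui_k, uj_k, ui_n, uj_n, delta_t):
    """First TTC optimization model (formula (11)) for two points i, j on the target.
    ui_k, uj_k: abscissas in the candidate frame; ui_n, uj_n: abscissas in the current frame."""
    w0 = uj_k - ui_k                       # imaging width of the pair in the candidate frame
    d_width = (uj_n - ui_n) - w0           # change of the pair's imaging width (formula (10))
    if abs(d_width) < 1e-6:
        return float("inf")
    return -w0 * delta_t / d_width         # equals w0 * delta_t / (w0 - w1), as in formula (7)

# Example: the pair widens from 30 px to 33 px over 0.1 s -> |TTC| = 1.0 s.
print(abs(ttc_from_point_pair(100.0, 130.0, 98.0, 131.0, 0.1)))
```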
This embodiment provides a safe driving prediction method comprising: acquiring a current frame and a candidate frame, obtaining a TTC model through a pinhole imaging model according to the current frame and the candidate frame, and obtaining a first TTC optimization model according to the TTC model and preset conditions, wherein the first TTC optimization model is used for predicting the safe driving of the vehicle. According to the technical scheme, the time to collision is determined based on the collected video frames, a farther distance can be sensed, the safety state of the intersection is given, and the driving of the vehicle is planned in advance.
In one embodiment, fig. 6 is a flowchart of a safe driving prediction method provided in an embodiment of the present disclosure, and in this embodiment, the safe driving prediction method is optimized. As shown in fig. 6, the optimized safe driving prediction method mainly includes the following steps:
and S21, acquiring the current frame and the candidate frame, and obtaining the TTC model through the pinhole imaging model according to the current frame and the candidate frame.
And S22, obtaining a first TTC optimization model according to the TTC model and preset conditions.
Steps S21 and S22 are the same as steps S11 and S12 in the above embodiments, and specific reference may be made to the description in the above embodiments, which is not repeated in this embodiment.
And S23, obtaining a second TTC optimization model based on the optical flow constraint and the first TTC optimization model.
In this embodiment, because the optical flow constraint is provided between the images acquired by the image acquisition module, the optical flow constraint is adopted to further optimize the first TTC optimization model to obtain the second optimization model, which can improve the accuracy of the TTC model in calculating the specific TTC value, thereby improving the accuracy of road safety prediction.
In one embodiment, said deriving a second TTC optimization model based on said optical flow constraints and said first TTC optimization model comprises: the optical flow constraint is expressed in the form of an optical flow constraint equation; performing Taylor expansion on the optical flow constraint equation to obtain an expanded optical flow constraint equation; and connecting the expanded optical flow constraint equation with the first TTC optimization model to obtain a second TTC optimization model.
For the two images of the current frame and the candidate frame, the optical flow constraint equation is shown in formula (12):

I(u, v, t) = I(u + Δu, v + Δv, t + Δt)    (12)

After Taylor expansion of the optical flow constraint equation, formula (13) can be obtained:

(∂I/∂u)·Δu + (∂I/∂v)·Δv + (∂I/∂t)·Δt = 0    (13)

Further, after performing an equivalent transformation on formula (11) of the first TTC optimization model, the following formulas (14) and (15) can be obtained, which hold for every point on the target object with the same constants constu and constv:

Δu + (u·Δt) / TTC = constu    (14)

Δv + (v·Δt) / TTC = constv    (15)

Combining formulas (13), (14) and (15) gives formula (16):

(∂I/∂u)·(constu − u·Δt/TTC) + (∂I/∂v)·(constv − v·Δt/TTC) + (∂I/∂t)·Δt = 0    (16)

After rearranging formula (16), the second TTC optimization model can be obtained, as shown in formula (17), in which TTC, constu and constv are the quantities to be solved and which gives the image motion of each point:

Δu = constu − (u·Δt) / TTC,    Δv = constv − (v·Δt) / TTC    (17)
Wherein the second TTC optimization model is used for predicting safe running of the vehicle.
In one embodiment, the solution is iteratively updated for the second TTC optimization model to obtain a solution for the second TTC optimization model.
In this embodiment, the second TTC optimization model may be iteratively updated using a nonlinear optimization method, where the nonlinear optimization method includes one or more of the following: steepest descent, Newton, Gauss-Newton, Levenberg-Marquardt (LM), dogleg, etc.
Further, the solution of the second TTC optimization model may be directly used as the TTC value to predict safe driving. If the solution of the second TTC optimization model is larger than a preset threshold value, judging that the vehicle is in a safe state; otherwise, the vehicle is judged to be in a dangerous state.
In one embodiment, iteratively updating the second TTC optimization model to solve includes obtaining an optimal solution for the second TTC optimization model using a gauss-newton method in a non-linear optimization.
In this embodiment, the second TTC optimization model is iteratively updated by using a gauss-newton method in the nonlinear optimization, so as to obtain an optimal solution of the second TTC optimization model. The Gauss-Newton method can improve the efficiency of obtaining the optimal solution, and the algorithm performance is good, and the realization difficulty is low.
Further, the safety state of the vehicle is judged according to the optimal solution and a preset threshold value. The preset threshold value is the shortest time that the vehicle can drive through the intersection or the time that the vehicle drives through the target planning area.
Specifically, the safety state of the vehicle is determined according to the comparison result of the optimal solution and a preset threshold value.
Further, the step of judging the safety state of the vehicle according to the optimal solution and the preset threshold value comprises: if the optimal solution is larger than the preset threshold value, judging that the vehicle is in a safe state; otherwise, judging that the vehicle is in a dangerous state; the preset threshold value is the shortest time that the vehicle can drive through the intersection or the time that the vehicle drives through the target planning area.
The preset threshold may be set according to an actual situation, and optionally, the preset threshold is greater than or equal to 10 s. Specifically, if the optimal solution is more than 10s, the vehicle is judged to be in a safe state; otherwise, the vehicle is judged to be in a dangerous state.
In one embodiment, obtaining an optimal solution of the second TTC optimization model by using the Gauss-Newton method in nonlinear optimization includes: performing LK optical flow tracking by using the feature points, and substituting the solution of the first TTC optimization model into the LK optical flow tracking to calculate an iterative initial value; taking the absolute value of the difference between the iterative initial value and the solution of the first TTC optimization model as an updating amount, and stopping the iterative updating if the updating amount is smaller than or equal to a set first threshold; otherwise, continuing to use the Gauss-Newton method to iteratively update the solved updating amount until the updating amount is smaller than or equal to the first threshold.
In one embodiment, a method for obtaining an optimal solution for the second TTC optimization model by using a gauss-newton method in nonlinear optimization is provided, as shown in fig. 8, and mainly includes the following steps:
and S31, performing LK optical flow tracking by using the characteristic points, and substituting the solution of the first TTC optimization model into the LK optical flow tracking to calculate an iterative initial value.
In the present embodiment, assuming that the illumination does not change, the following formula (18) holds.
I(u,v,t)=I(u+Δu,v+Δv,t+Δt) (18)
Since the illumination varies at different times in practical applications, the objective function for solving the optimal solution is formula (19).
||I(u,v,t) − I(u+Δu, v+Δv, t+Δt)||² = ||e||²    (19)
And tracking the LK optical flow by using the characteristic points, bringing the solution of the first TTC optimization model into the LK optical flow, and calculating the TTC value in the second TTC optimization model as an iterative initial value.
And S32, the absolute value of the difference between the initial value of the iteration and the solution of the first TTC optimization model is an updating amount.
S33, whether the updating amount is less than or equal to the set first threshold value is judged.
The first threshold may be set according to an actual situation, and optionally, the first threshold is 1% of a solution of the first TTC optimization model.
And S34, stopping iterative updating when the updating amount is less than or equal to a set first threshold value.
And S35, if the updating amount is larger than the set first threshold value, continuously using the Gaussian Newton method to iteratively update the solved updating amount, and returning to execute the step S33.
Specifically, the iteratively updating and solving the update quantity by using the gauss-newton method includes: bringing the solution of the first TTC optimization model into a Gauss-Newton model to obtain a Gauss-Newton error value (e); the update is solved by the gauss-newton error value.
Specifically, LK optical flow tracking is carried out using the feature points, the solution of the first TTC optimization model is substituted into the LK optical flow, and the values of TTC, constu and constv in the second TTC optimization model are calculated as the initial values of the iteration. These values are substituted into formula (17) to calculate Δu and Δv, and the solved Δu and Δv are carried into formula (19) to obtain the Gauss-Newton error value e. If the Gauss-Newton error value e is not reduced compared with the error value obtained in the previous iteration, the iteration is stopped; or, if the reduction of the Gauss-Newton error value e compared with the error value of the previous iteration is smaller than a set threshold (where the set threshold is determined by the initial value of the iteration), the iteration is stopped, and the TTC value obtained in this iteration is used as the optimal solution of the second TTC optimization model. Otherwise, the updating amount is solved through the Gauss-Newton error value e and the coordinates are updated; if the updating amount is smaller than or equal to the first threshold, the iterative updating is stopped and the TTC value obtained in this iteration is used as the optimal solution of the second TTC optimization model; if the updating amount is larger than the first threshold, the coordinates are updated and the updating amount is solved again.
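The iteration described above can be sketched as a Gauss-Newton fit of the three unknowns (TTC, constu, constv) to the measured LK flow vectors, following the per-point parameterization of formulas (14) and (15); this is an illustrative stand-in under those assumptions, not the exact implementation of the disclosure.

```python
import numpy as np

def fit_ttc_gauss_newton(pts, flows, delta_t, ttc0, max_iter=20, rel_tol=0.01):
    """Gauss-Newton fit of (TTC, const_u, const_v) to measured optical flow.
    pts:   (N, 2) feature-point coordinates (u, v) in the candidate frame
    flows: (N, 2) measured LK flow vectors (du, dv) toward the current frame
    ttc0:  iterative initial value, e.g. the solution of the first TTC optimization model
    Flow model per point (cf. formulas (14), (15)):
        du = const_u - u * delta_t / TTC,  dv = const_v - v * delta_t / TTC
    Iteration stops once the TTC update is <= rel_tol * |ttc0| (the first threshold)."""
    theta = np.array([ttc0, 0.0, 0.0], dtype=float)    # [TTC, const_u, const_v]
    for _ in range(max_iter):
        ttc, cu, cv = theta
        pred = np.column_stack([cu - pts[:, 0] * delta_t / ttc,
                                cv - pts[:, 1] * delta_t / ttc])
        r = (flows - pred).ravel()                      # residuals, shape (2N,)
        J = np.zeros((2 * len(pts), 3))                 # Jacobian of the residuals
        J[0::2, 0] = -pts[:, 0] * delta_t / ttc**2      # d r_u / d TTC
        J[1::2, 0] = -pts[:, 1] * delta_t / ttc**2      # d r_v / d TTC
        J[0::2, 1] = -1.0                               # d r_u / d const_u
        J[1::2, 2] = -1.0                               # d r_v / d const_v
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)   # Gauss-Newton update amount
        theta += step
        if abs(step[0]) <= rel_tol * abs(ttc0):         # update amount small enough
            break
    return theta[0]                                     # TTC used as the optimal solution
```

The returned TTC can then be compared with the preset threshold (for example 10 s) to decide whether the road is in a safe or a dangerous state.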
This embodiment provides a safe driving prediction method comprising: acquiring a current frame and a candidate frame, obtaining a TTC model through a pinhole imaging model according to the current frame and the candidate frame, obtaining a first TTC optimization model according to the TTC model and preset conditions, obtaining a second TTC optimization model based on an optical flow constraint and the first TTC optimization model, obtaining an optimal solution of the second TTC optimization model in a nonlinear optimization manner, and determining the road safety state according to the optimal solution. According to the technical scheme, the optimal solution of the second TTC optimization model is obtained in a nonlinear optimization manner and the road safety state is determined according to the optimal solution, so that the prediction process is closer to the actual condition of the vehicle and the prediction accuracy is improved.
To verify the accuracy and precision of the optimized TTC model, the TTC true value needs to be obtained. In the present embodiment, two ways are provided for obtaining the true value of TTC.
In the first way, another GPS-equipped vehicle is used to cooperate with the acquisition, and the TTC value solved from the GPS-fitted speeds of the two vehicles is used as the true value.
In the second way, the scene is mapped and relocalized through the COLMAP algorithm, and the distance and fitted speed of the other vehicle are obtained by combining the detection results, so that the TTC value is solved and used as the true value. As shown in fig. 9a and 9b, the COLMAP ranging and the GPS ranging are very close and basically coincide. As shown in fig. 10a, the speed fitted with the COLMAP algorithm is very close to the actual speed; as shown in fig. 10b, the speed fitted from the GPS measurements is very close to the actual speed.
As shown in fig. 11, TTC-gt represents the TTC true value obtained by mapping and relocalizing the scene with the COLMAP algorithm and combining the detection results with the ranging and speed fitting of the other vehicle. TTC-animation represents the TTC value calculated by the road safety algorithm provided by this embodiment. As shown in fig. 11, the TTC value calculated by the road safety algorithm provided by this embodiment is very close to the actual TTC value in actual operation.
Fig. 12 is a structural diagram of a safe driving prediction optimizing device provided in an embodiment of the present disclosure, where the device may be applied to a case where a driving condition of an unmanned vehicle is predicted in advance, and the safe driving prediction optimizing device may be implemented in a software and/or hardware manner, and may be configured in a visual perception module in the unmanned vehicle.
As shown in fig. 12, the safe driving prediction optimizing apparatus provided in the embodiment of the present disclosure mainly includes: a TTC model determining module 121 and a first optimized TTC determining module 122.
The TTC model determining module 121 is used for acquiring a current frame and a candidate frame and obtaining a TTC model through the pinhole imaging model according to the current frame and the candidate frame; the first optimized TTC determining module 122 is configured to obtain a first TTC optimization model according to the TTC model and preset conditions; the first TTC optimization model is used for predicting the safe driving of the vehicle.
The present embodiment provides a safe driving prediction device, which is mainly used for executing the following steps: acquiring a current frame and a candidate frame, obtaining a TTC model through a pinhole imaging model according to the current frame and the candidate frame, and obtaining a first TTC optimization model according to the TTC model and preset conditions, wherein the first TTC optimization model is used for predicting the safe driving of a vehicle. According to the technical scheme, the time to collision is determined based on the collected video frames, a farther distance can be sensed, the safety state of the intersection is given, and the driving of the vehicle is planned in advance.
In one embodiment, the apparatus further comprises: a second TTC optimization model determining module, configured to obtain a second TTC optimization model based on an optical flow constraint and the first TTC optimization model.
In one embodiment, said deriving a second TTC optimization model based on said optical flow constraints and said first TTC optimization model comprises: the optical flow constraint is expressed in the form of an optical flow constraint equation; performing Taylor expansion on the optical flow constraint equation to obtain an expanded optical flow constraint equation; and connecting the expanded optical flow constraint equation with the first TTC optimization model to obtain a second TTC optimization model.
In one embodiment, the solution is iteratively updated for the second TTC optimization model to obtain a solution for the second TTC optimization model.
In one embodiment, iteratively updating the second TTC optimization model to solve includes obtaining an optimal solution for the second TTC optimization model using a gauss-newton method in a non-linear optimization.
In one embodiment, obtaining an optimal solution of the second TTC optimization model by using the Gauss-Newton method in nonlinear optimization includes: performing LK optical flow tracking by using the feature points, and substituting the solution of the first TTC optimization model into the LK optical flow tracking to calculate an iterative initial value; taking the absolute value of the difference between the iterative initial value and the solution of the first TTC optimization model as an updating amount, and stopping the iterative updating if the updating amount is smaller than or equal to a set first threshold; otherwise, continuing to use the Gauss-Newton method to iteratively update the solved updating amount until the updating amount is smaller than or equal to the first threshold.
In one embodiment, said iteratively updating the solution update quantity using the gauss-newton method comprises: bringing the solution of the first TTC optimization model into a Gauss-Newton model to obtain a Gauss-Newton error value; the update is solved by the gauss-newton error value.
In one embodiment, the method further comprises a preset threshold value, and the vehicle safety state is judged according to the optimal solution and the preset threshold value.
In one embodiment, determining the vehicle safety state based on the optimal solution and the preset threshold comprises: if the optimal solution is larger than the preset threshold value, judging that the vehicle is in a safe state; otherwise, judging that the vehicle is in a dangerous state; the preset threshold value is the shortest time that the vehicle can drive through the intersection or the time that the vehicle drives through the target planning area.
In one embodiment, the preset conditions include: the target object is a rigid body, the target object does not rotate during the relative movement, and the depth change of the target object between the current frame and the candidate frame within the short time interval is ignored.
In one embodiment, the TTC model determination module comprises: a current frame determining unit for determining an image frame having a timestamp closest to a current time as a current frame; the candidate frame candidate set determining unit is used for determining all image frames with the time stamps different from the current frame time stamps by less than the preset time length as candidate frame candidate sets; and the candidate frame selecting unit is used for selecting the candidate frame from the candidate frame candidate set.
In one embodiment, the candidate frame selecting unit is specifically configured to sequentially select, according to a sequence from back to front of timestamps, image frames from the candidate frame candidate set as candidate frames to be selected; extracting a first feature point of the candidate frame to be selected; determining a second feature point matched with the first feature point in the current frame; calculating an optical flow vector between the first feature point and the second feature point; and when the optical flow vector is larger than the optical flow threshold value, determining the candidate frame to be selected as a candidate frame.
The road safety prediction optimization device in the embodiment shown in fig. 12 may be used to implement the technical solution of the above-mentioned road safety prediction method embodiment, and the implementation principle and technical effect are similar, and are not described herein again.
Fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device may specifically be a road safety prediction device. The electronic device provided by the embodiment of the disclosure can execute the processing flow provided by the road safety prediction embodiment.
As shown in fig. 13, the electronic device 13 includes: memory 131, processor 132, computer programs, and communications interface 133; wherein a computer program is stored in the memory 131 and configured to execute the road safety prediction optimization method as described above by the processor 132.
In addition, the disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the road safety prediction method described in the above embodiments.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Scheme 1. A safe driving prediction optimization method, characterized by comprising the following steps:
acquiring a current frame and a candidate frame, and obtaining a TTC model through a pinhole imaging model according to the current frame and the candidate frame;
obtaining a first TTC optimization model according to the TTC model and preset conditions;
the first TTC optimization model is used for predicting safe running of the vehicle.
Scheme 2. The method according to scheme 1, further comprising: obtaining a second TTC optimization model based on an optical flow constraint and the first TTC optimization model.
Scheme 3. the method of scheme 2, wherein said deriving a second TTC optimization model based on said optical flow constraints and said first TTC optimization model, comprises:
the optical flow constraint is expressed in the form of an optical flow constraint equation;
performing Taylor expansion on the optical flow constraint equation to obtain an expanded optical flow constraint equation;
and connecting the expanded optical flow constraint equation with the first TTC optimization model to obtain a second TTC optimization model.
Scheme 4. the method according to scheme 2, wherein the second TTC optimization model is iteratively updated and solved to obtain a solution of the second TTC optimization model.
Scheme 5. the method of scheme 4, wherein iteratively updating the solution for the second TTC optimization model comprises obtaining an optimal solution for the second TTC optimization model using the gauss-newton method in a non-linear optimization.
Scheme 6. the method of scheme 5, wherein obtaining an optimal solution for the second TTC optimization model using the gauss-newton method in nonlinear optimization comprises:
performing LK optical flow tracking by using the characteristic points, substituting the solution of the first TTC optimization model into the LK optical flow tracking to calculate an iterative initial value;
taking the absolute value of the difference between the iterative initial value and the solution of the first TTC optimization model as an updating amount, and stopping the iterative updating if the updating amount is smaller than or equal to a set first threshold; otherwise, continuing to use the Gauss-Newton method to iteratively update the solved updating amount until the updating amount is smaller than or equal to the first threshold.
Scheme 7. The method of scheme 6, wherein iteratively solving for the update amount using the Gauss-Newton method comprises:
substituting the solution of the first TTC optimization model into the Gauss-Newton model to obtain a Gauss-Newton error value, and solving for the update amount from the Gauss-Newton error value.
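A minimal numerical sketch of a Gauss-Newton update of the kind described in schemes 5 to 7, written against the least-squares form sketched after scheme 3; the residual definition and the names g, it, tau0, thresh and max_iter are illustrative assumptions rather than the disclosed formulation:

import numpy as np

def gauss_newton_ttc(g, it, tau0, thresh=1e-3, max_iter=20):
    # Refine a TTC estimate tau by Gauss-Newton on residuals r_i = g_i / tau + it_i,
    # where g_i = x*Ix + y*Iy (radial image gradient) and it_i = It for each pixel.
    # tau0 plays the role of the iteration initial value, thresh of the first threshold.
    tau = float(tau0)
    for _ in range(max_iter):
        r = g / tau + it              # residuals of the combined flow/TTC constraint
        j = -g / tau ** 2             # Jacobian dr/dtau (single parameter)
        delta = -(j @ r) / (j @ j)    # Gauss-Newton step: -(J^T J)^-1 J^T r
        tau += delta
        if abs(delta) <= thresh:      # update amount at or below the threshold: stop
            break
    return tau

In this sketch the update amount is the Gauss-Newton step itself, and the iteration stops once it falls to or below the set threshold, mirroring the stopping rule above.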
Scheme 8. The method of scheme 5, further comprising: presetting a threshold,
and determining the safety state of the vehicle according to the optimal solution and the preset threshold.
Scheme 9. The method of scheme 8, wherein determining the vehicle safety state according to the optimal solution and the preset threshold comprises: if the optimal solution is larger than the preset threshold, determining that the vehicle is in a safe state; otherwise, determining that the vehicle is in a dangerous state; wherein the preset threshold is the shortest time for the vehicle to drive through the intersection, or the time for the vehicle to drive through the target planning area.
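Expressed as a short snippet (the names ttc_opt and time_to_clear are assumptions made for illustration), the decision rule of schemes 8 and 9 compares the optimal solution with a scenario-dependent time budget:

def vehicle_safety_state(ttc_opt, time_to_clear):
    # time_to_clear: e.g. the shortest time for the vehicle to drive through the
    # intersection, or the time to drive through the target planning area.
    return "safe" if ttc_opt > time_to_clear else "dangerous"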
Scheme 10. The method according to scheme 1, wherein the preset conditions include: the target object is a rigid body, the target object does not rotate in the direction of relative motion, and the depth change of the target object between the current frame and the candidate frame within a short time is negligible.
Scheme 11. The method of scheme 1, wherein acquiring the current frame and the candidate frame comprises:
determining the image frame with the timestamp closest to the current time as the current frame;
determining, as a candidate frame candidate set, all image frames whose timestamp differs from the current frame's timestamp by less than a preset time length;
and selecting the candidate frame from the candidate frame candidate set.
Scheme 12. The method of scheme 11, wherein selecting the candidate frame from the candidate frame candidate set comprises:
sequentially selecting image frames from the candidate frame candidate set, in order of timestamp from latest to earliest, as candidate frames to be selected;
extracting a first feature point of the candidate frame to be selected;
determining a second feature point matched with the first feature point in the current frame;
calculating an optical flow vector between the first feature point and the second feature point;
and when the optical flow vector is larger than an optical flow threshold, determining the candidate frame to be selected as the candidate frame.
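A possible OpenCV-flavoured sketch of the frame selection in schemes 11 and 12; the data layout, the 0.5 s time window, the 2-pixel flow threshold and the use of the mean flow magnitude are assumptions made for illustration only:

import cv2
import numpy as np

def pick_candidate_frame(frames, t_window=0.5, flow_thresh=2.0):
    # frames: list of (timestamp, grayscale image), ordered oldest to newest.
    t_cur, img_cur = frames[-1]   # image frame closest to the current time
    # candidate set: frames whose timestamp differs from the current frame's
    # timestamp by less than the preset time length
    cand_set = [(t, im) for t, im in frames[:-1] if t_cur - t < t_window]
    for t, img_cand in reversed(cand_set):   # timestamps from latest to earliest
        p_cand = cv2.goodFeaturesToTrack(img_cand, 200, 0.01, 10)   # first feature points
        if p_cand is None:
            continue
        # second feature points: LK optical flow matches in the current frame
        p_cur, status, _ = cv2.calcOpticalFlowPyrLK(img_cand, img_cur, p_cand, None)
        good = status.ravel() == 1
        flow = np.linalg.norm((p_cur - p_cand)[good].reshape(-1, 2), axis=1)
        if flow.size and flow.mean() > flow_thresh:   # optical flow vectors large enough
            return t, img_cand
    return None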
Scheme 13. A safe driving prediction optimization apparatus, comprising:
a TTC model determining module, configured to acquire a current frame and a candidate frame, and to obtain a TTC model through a pinhole imaging model according to the current frame and the candidate frame;
a first TTC optimization model determining module, configured to obtain a first TTC optimization model according to the TTC model and preset conditions;
wherein the first TTC optimization model is used for predicting safe driving of the vehicle.
Scheme 14. An electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any one of schemes 1 to 12.
Scheme 15. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of schemes 1 to 12.

Claims (10)

1. A safe driving prediction optimization method, comprising:
acquiring a current frame and a candidate frame, and obtaining a TTC (time-to-collision) model through a pinhole imaging model according to the current frame and the candidate frame;
obtaining a first TTC optimization model according to the TTC model and preset conditions;
wherein the first TTC optimization model is used for predicting safe driving of the vehicle.
2. The method of claim 1, further comprising: an optical flow constraint, wherein a second TTC optimization model is obtained based on the optical flow constraint and the first TTC optimization model.
3. The method of claim 2, wherein obtaining the second TTC optimization model based on the optical flow constraint and the first TTC optimization model comprises:
expressing the optical flow constraint in the form of an optical flow constraint equation;
performing Taylor expansion on the optical flow constraint equation to obtain an expanded optical flow constraint equation;
and combining the expanded optical flow constraint equation with the first TTC optimization model as simultaneous equations to obtain the second TTC optimization model.
4. The method of claim 2, wherein the second TTC optimization model is solved by iterative updating to obtain a solution of the second TTC optimization model.
5. The method of claim 4, wherein iteratively updating the solution of the second TTC optimization model comprises using the Gauss-Newton method in nonlinear optimization to obtain an optimal solution of the second TTC optimization model.
6. The method of claim 5, wherein obtaining the optimal solution of the second TTC optimization model using the Gauss-Newton method in nonlinear optimization comprises:
performing LK (Lucas-Kanade) optical flow tracking using the feature points, and substituting the solution of the first TTC optimization model into the LK optical flow tracking to calculate an iteration initial value;
taking the absolute value of the difference between the iteration initial value and the solution of the first TTC optimization model as an update amount; if the update amount is smaller than or equal to a set first threshold, stopping the iterative updating; otherwise, continuing to iteratively solve for the update amount using the Gauss-Newton method until the update amount is smaller than or equal to the first threshold.
7. The method of claim 6, wherein iteratively solving for the update amount using the Gauss-Newton method comprises:
substituting the solution of the first TTC optimization model into the Gauss-Newton model to obtain a Gauss-Newton error value, and solving for the update amount from the Gauss-Newton error value.
8. The method of claim 5, further comprising: presetting a threshold, and determining the safety state of the vehicle according to the optimal solution and the preset threshold.
9. The method of claim 8, wherein determining the vehicle safety state according to the optimal solution and the preset threshold comprises: if the optimal solution is larger than the preset threshold, determining that the vehicle is in a safe state; otherwise, determining that the vehicle is in a dangerous state; wherein the preset threshold is the shortest time for the vehicle to drive through the intersection, or the time for the vehicle to drive through the target planning area.
10. The method according to claim 1, wherein the preset conditions include: the target object is a rigid body, the target object does not rotate in the direction of relative motion, and the depth change of the target object between the current frame and the candidate frame within a short time is negligible.
CN202111265707.1A 2021-10-28 2021-10-28 Safe driving prediction optimization method, device, equipment and storage medium Pending CN113869607A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111265707.1A CN113869607A (en) 2021-10-28 2021-10-28 Safe driving prediction optimization method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111265707.1A CN113869607A (en) 2021-10-28 2021-10-28 Safe driving prediction optimization method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113869607A true CN113869607A (en) 2021-12-31

Family

ID=78998339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111265707.1A Pending CN113869607A (en) 2021-10-28 2021-10-28 Safe driving prediction optimization method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113869607A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115116037A (en) * 2022-06-30 2022-09-27 北京旋极信息技术股份有限公司 Method, device and system for estimating distance and speed of vehicle

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103743394A (en) * 2014-01-07 2014-04-23 北京工业大学 Light-stream-based obstacle avoiding method of mobile robot
CN103854293A (en) * 2014-02-26 2014-06-11 奇瑞汽车股份有限公司 Vehicle tracking method and device based on feature point matching
CN105574552A (en) * 2014-10-09 2016-05-11 东北大学 Vehicle ranging and collision early warning method based on monocular vision
WO2019095937A1 (en) * 2017-11-16 2019-05-23 华为技术有限公司 Collision warning method and device
CN110435647A (en) * 2019-07-26 2019-11-12 大连理工大学 A kind of vehicle safety anticollision control method of the TTC based on rolling optimization parameter
US20210188263A1 (en) * 2019-12-23 2021-06-24 Baidu International Technology (Shenzhen) Co., Ltd. Collision detection method, and device, as well as electronic device and storage medium
CN111950483A (en) * 2020-08-18 2020-11-17 北京理工大学 Vision-based vehicle front collision prediction method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NURUL NAZLI ROSLI ET AL.: "A review of graphene based transparent conducting films for use in solar photovoltaic applications", 《JOURNAL OF PUBLIC ECONOMICS》, vol. 99, no. 2019, 12 January 2019 (2019-01-12), pages 83 - 99, XP085546263, DOI: 10.1016/j.rser.2018.09.011 *
Li Wenjiao; Qin Bo: "Anti-occlusion vehicle tracking algorithm based on Kalman filter", Computer Applications (计算机应用), no. 2, 31 December 2012 (2012-12-31), pages 180 - 182 *
Wang Chao; Zhao Chunxia; Ren Mingwu; Wang Huan: "Vehicle collision model based on monocular vision", Journal of Nanjing University of Science and Technology (南京理工大学学报), no. 06, 30 December 2014 (2014-12-30), pages 37 - 42 *

Similar Documents

Publication Publication Date Title
CN111693972B (en) Vehicle position and speed estimation method based on binocular sequence images
US9990736B2 (en) Robust anytime tracking combining 3D shape, color, and motion with annealed dynamic histograms
KR101725060B1 (en) Apparatus for recognizing location mobile robot using key point based on gradient and method thereof
KR101776621B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
KR101776622B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
KR101776620B1 (en) Apparatus for recognizing location mobile robot using search based correlative matching and method thereof
US8102427B2 (en) Camera egomotion estimation from an infra-red image sequence for night vision
Barth et al. Estimating the driving state of oncoming vehicles from a moving platform using stereo vision
Adarve et al. Computing occupancy grids from multiple sensors using linear opinion pools
Sucar et al. Bayesian scale estimation for monocular slam based on generic object detection for correcting scale drift
Wedel et al. Realtime depth estimation and obstacle detection from monocular video
CN107808390A (en) Estimated using the object distance of the data from single camera
CN110543807B (en) Method for verifying obstacle candidates
US20110142283A1 (en) Apparatus and method for moving object detection
US20210158544A1 (en) Method, Device and Computer-Readable Storage Medium with Instructions for Processing Sensor Data
JP2006053890A (en) Obstacle detection apparatus and method therefor
EP3070676A1 (en) A system and a method for estimation of motion
KR101030317B1 (en) Apparatus for tracking obstacle using stereo vision and method thereof
KR102082254B1 (en) a vehicle recognizing system
JP4631036B2 (en) Passer-by behavior analysis device, passer-by behavior analysis method, and program thereof
CN110992424B (en) Positioning method and system based on binocular vision
KR20180047149A (en) Apparatus and method for risk alarming of collision
KR102003387B1 (en) Method for detecting and locating traffic participants using bird's-eye view image, computer-readerble recording medium storing traffic participants detecting and locating program
CN113869607A (en) Safe driving prediction optimization method, device, equipment and storage medium
JP2020118575A (en) Inter-vehicle distance measurement device, error model generation device, learning model generation device, and method and program thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination