
WO2017067187A1 - Method, device and system for processing the start of a preceding vehicle (前车起步处理方法、装置和系统) - Google Patents

Method, device and system for processing the start of a preceding vehicle

Info

Publication number
WO2017067187A1
WO2017067187A1 (application PCT/CN2016/085438; priority CN2016085438W)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
current vehicle
image
target
state
Prior art date
Application number
PCT/CN2016/085438
Other languages
English (en)
French (fr)
Inventor
谭伟
张富平
黄洋文
戴虎
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority to EP16856628.9A (EP3367361B1)
Priority to US15/770,280 (US10818172B2)
Publication of WO2017067187A1

Classifications

    • G08G1/09626: Variable traffic instructions with an indicator mounted inside the vehicle, e.g. giving voice messages, where the origin of the information is within the own vehicle, e.g. a local storage device or digital map
    • G08G1/166: Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G06V20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • B60Q9/00: Arrangement or adaptation of signal devices not provided for in main groups B60Q1/00-B60Q7/00, e.g. haptic signalling
    • B60W30/17: Control of distance between vehicles, with provision for special action when the preceding vehicle comes to a halt, e.g. stop and go
    • B60W30/18054: Propelling the vehicle in particular drive situations, at standstill, e.g. engine in idling state
    • B60W40/04: Estimation of non-directly measurable driving parameters related to ambient conditions: traffic conditions
    • B60W40/105: Estimation of non-directly measurable driving parameters related to vehicle motion: speed
    • B60W40/107: Estimation of non-directly measurable driving parameters related to vehicle motion: longitudinal acceleration
    • B60W50/0098: Details of control systems ensuring comfort, safety or stability not otherwise provided for
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • G01S13/00: Systems using the reflection or reradiation of radio waves, e.g. radar systems
    • G06T7/248: Analysis of motion using feature-based methods, involving reference images or patches
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G08G1/0133: Traffic data processing for classifying the traffic situation
    • G08G1/04: Detecting movement of traffic using optical or ultrasonic detectors
    • G08G1/09623: Systems involving the acquisition of information from passive traffic signs by means mounted on the vehicle
    • G08G1/0965: In-vehicle variable traffic instruction indicator responding to signals from another vehicle, e.g. an emergency vehicle
    • B60W2420/403: Image sensing, e.g. optical camera
    • B60W2420/50: Magnetic or electromagnetic sensors
    • B60W2520/04: Vehicle stop
    • B60W2520/10: Longitudinal speed
    • B60W2520/105: Longitudinal acceleration
    • F02N2200/0801: Starting-apparatus control parameters related to the vehicle: vehicle speed
    • F02N2200/125: Starting-apparatus control parameters related to the vehicle exterior: information about other vehicles, traffic lights or traffic congestion
    • G06T2207/30241: Trajectory
    • G06T2207/30252: Vehicle exterior; vicinity of vehicle
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06V2201/08: Detecting or categorising vehicles

Definitions

  • The present application relates to the field of driver assistance systems, and in particular to a method, device and system for processing the start of a preceding vehicle.
  • The embodiments of the present application provide a method, device and system for processing the start of a preceding vehicle, to solve at least the technical problem that the prior art cannot accurately remind the driver, while driving, that the preceding vehicle has started.
  • A processing method for a preceding-vehicle start is provided, applied to a preceding-vehicle start processing system comprising a video sensor and a gravity sensor. The processing method includes: collecting an image of the area in front of the current vehicle with the video sensor, and collecting acceleration information of the current vehicle with the gravity sensor; determining the driving state of the current vehicle based on image features of the front image and/or the acceleration information; when the driving state of the current vehicle is the stationary state, acquiring the motion trajectory of a target object in the front image; and determining, according to the motion trajectory of the target object, whether to generate reminder information prompting the current vehicle to start.
  • A processing device for a preceding-vehicle start is also provided, applied to a preceding-vehicle start processing system comprising a video sensor and a gravity sensor. The processing device comprises: an acquisition unit, configured to collect an image of the area in front of the current vehicle with the video sensor and to acquire acceleration information of the current vehicle with the gravity sensor; a determining unit, configured to determine the driving state of the current vehicle based on image features of the front image and/or the acceleration information; an acquiring unit, configured to acquire, when the driving state of the current vehicle is the stationary state, the motion trajectory of a target object in the front image; and a generating unit, configured to determine, according to the motion trajectory of the target object, whether to generate reminder information prompting the current vehicle to start.
  • A processing system for a preceding-vehicle start is also provided, comprising: a video sensor mounted on the front windshield of the current vehicle, on the same horizontal line as the rear-view mirror, for capturing the image in front of the current vehicle; a gravity sensor for acquiring acceleration information of the current vehicle; and a processor, connected to the camera and the gravity sensor, for determining the driving state of the current vehicle based on image features of the front image and/or the acceleration information.
  • When the driving state of the current vehicle is the stationary state, the processor acquires the motion trajectory of a target object in the front image and determines, according to that trajectory, whether to generate reminder information prompting the current vehicle to start.
  • An application program is further provided for executing, when run, the above processing method for a preceding-vehicle start.
  • A storage medium is further provided for storing an application program that, when run, executes the above processing method for a preceding-vehicle start.
  • In the embodiments of the present application, the driving state of the current vehicle is determined based on image features of the front image acquired by the video sensor and/or the acceleration information collected by the gravity sensor. When the driving state of the current vehicle is the stationary state, the motion trajectory of the target object (including the preceding vehicle) in the front image is acquired, and whether to generate reminder information prompting the current vehicle to start is determined according to that trajectory.
  • This scheme uses few parameters and produces accurate results, solving the technical problem that the prior art cannot accurately remind the driver of a preceding vehicle's start while driving: the driver is promptly reminded when the preceding vehicle drives off, so that the current vehicle can move off in time.
  • FIG. 1 is a flowchart of a method for processing a preceding-vehicle start according to an embodiment of the present application.
  • FIG. 2 is a flowchart of an optional method for processing a preceding-vehicle start according to an embodiment of the present application.
  • FIG. 3 is a flowchart of another optional method for processing a preceding-vehicle start according to an embodiment of the present application.
  • FIG. 4 is a flowchart of a third optional method for processing a preceding-vehicle start according to an embodiment of the present application.
  • FIG. 5 is a flowchart of a fourth optional method for processing a preceding-vehicle start according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a processing device for a preceding-vehicle start according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a processing system for a preceding-vehicle start according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an optional preceding-vehicle start processing system according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of another optional preceding-vehicle start processing system according to an embodiment of the present application.
  • ADAS: Advanced Driver Assistance System.
  • Monocular Vision: using a single camera to capture scene information for intelligent analysis.
  • Line Segment Detector (LSD): a straight-line segment detection algorithm that achieves sub-pixel accuracy in linear time.
  • Kalman filter: uses the linear system state equation and observed measurements to produce a statistically optimal estimate of the system state.
  • G-sensor: a sensor that senses changes in acceleration.
  • CAN bus: short for Controller Area Network, an ISO-standard serial communication protocol and one of the most widely used fieldbuses.
  • Vehicle tail: the rear of a vehicle in the image, confirmed by the detection and tracking algorithms and represented by a rectangular box; in this document it is used interchangeably with "vehicle" and "target".
  • Preceding vehicle: the single vehicle identified directly in front of the current vehicle in the image; used interchangeably with "front vehicle" in this document.
  • Histogram of Oriented Gradients (HOG): a feature descriptor used for object detection in computer vision and image processing, constructed by computing histograms of gradient orientations over local regions of the image.
  • AdaBoost: an iterative algorithm whose core idea is to train different weak classifiers on the same training set and then combine them into a strong classifier.
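To make the Kalman filter entry above concrete, here is a minimal one-dimensional (constant-value) Kalman filter sketch; the function name, noise variances and sample readings are illustrative assumptions and do not come from the patent.

```python
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter: estimate a slowly varying value from noisy readings.

    q: process noise variance, r: measurement noise variance (assumed values).
    Returns the list of filtered estimates after each measurement."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                    # predict: the state is modelled as constant
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update the estimate with the new reading
        p = (1.0 - k) * p            # update the estimate variance
        estimates.append(x)
    return estimates

# Noisy readings around a true value of 10: the estimate moves toward 10.
readings = [9.4, 10.6, 9.8, 10.3, 9.9, 10.1]
filtered = kalman_1d(readings)
```

In the patent's setting such a filter would smooth the tracked position of the vehicle tail from frame to frame; the scalar version here is only the simplest instance of the idea.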
  • A method embodiment of a method for processing a preceding-vehicle start is provided. It should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps may be performed in an order different from the one described here.
  • FIG. 1 is a flowchart of a method for processing a preceding-vehicle start according to an embodiment of the present application.
  • The method is applied to a preceding-vehicle start processing system comprising a video sensor and a gravity sensor, and may include the following steps:
  • Step S102: collect an image of the area in front of the current vehicle with the video sensor, and collect acceleration information of the current vehicle with the gravity sensor.
  • Step S104: determine the driving state of the current vehicle based on image features of the front image and/or the acceleration information.
  • The driving state of the current vehicle may be the stationary state or the moving state.
  • The stationary state means the vehicle is at rest relative to the road surface, for example while waiting at a traffic light or in a traffic jam.
  • The moving state means the vehicle is moving relative to the road surface, whether accelerating, decelerating or travelling at constant speed.
  • Step S106: when the driving state of the current vehicle is the stationary state, acquire the motion trajectory of a target object in the front image.
  • There may be one or more target objects in the front image.
  • Step S108: determine, according to the motion trajectory of the target object, whether to generate reminder information prompting the current vehicle to start.
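Steps S102 to S108 can be sketched as a minimal decision loop. Everything below (the function and threshold names, the placeholder sensor readings and trajectory) is a hypothetical illustration under the assumptions stated in the comments, not the patent's implementation.

```python
STATIONARY, MOVING = "stationary", "moving"

def driving_state(fg_ratio, accel_values, ratio_thr=0.2, accel_thr=1.0):
    """S104: classify the vehicle as stationary or moving from the
    frame-difference foreground ratio and the G-sensor readings
    (thresholds are assumed values for this sketch)."""
    if fg_ratio > ratio_thr or any(abs(a) > accel_thr for a in accel_values):
        return MOVING
    return STATIONARY

def preceding_vehicle_departed(trajectory, depart_thr=15.0):
    """S106/S108: decide from the target's trajectory (here, the vertical
    image coordinate of the tracked vehicle tail per frame) whether it has
    moved far enough up the image to count as driving off."""
    return len(trajectory) >= 2 and (trajectory[0] - trajectory[-1]) > depart_thr

def process_frame(fg_ratio, accel_values, trajectory):
    """S102-S108 in one pass: returns reminder text, or None if no reminder."""
    if driving_state(fg_ratio, accel_values) == STATIONARY and \
            preceding_vehicle_departed(trajectory):
        return "preceding vehicle has started - please move off"
    return None

# Host vehicle at rest, preceding vehicle's tail rising (receding) in the image:
reminder = process_frame(0.05, [0.1, 0.0, 0.2], [240.0, 232.0, 221.0, 210.0])
```

A reminder is produced only when both conditions hold: the host vehicle is judged stationary and the tracked tail has receded past the departure threshold.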
  • Through steps S102 to S108, the driving state of the current vehicle is determined from image features of the front image acquired by the video sensor and/or the acceleration information collected by the gravity sensor; when the state is stationary, the motion trajectory of the target object (including the preceding vehicle) in the front image is acquired, and whether to generate reminder information prompting the current vehicle to start is decided from that trajectory.
  • The scheme uses few parameters and yields accurate results, solving the problem that the prior art cannot accurately remind the driver of a preceding vehicle's start while driving, so that the driver is promptly reminded when the preceding vehicle drives off and can move the current vehicle in time.
  • Optionally, a continuous stream of images of the area in front of the current vehicle can be captured in real time by a camera mounted on the windshield behind the rear-view mirror of the current vehicle, to acquire the front image of the current vehicle.
  • The gravity sensor acquires the acceleration information of the current vehicle in real time. The acceleration information includes the current vehicle's acceleration values along three directions in the world coordinate system: the direction of travel, the direction perpendicular to travel and parallel to the road surface, and the direction perpendicular to the road surface.
  • A single camera may be used in the above embodiment: continuous images of the area in front of the current vehicle are captured by the camera, and the gravity sensor acquires the acceleration information of the current vehicle. The driving state of the current vehicle is determined based on image features of the front image and/or the acceleration information. If the determined driving state is the stationary state, the motion trajectories of one or more target objects in the front image are acquired, and from these trajectories it is determined whether the preceding vehicle has driven off. If the preceding vehicle has driven off while the current vehicle is still stationary, reminder information is generated.
  • For example, a camera is mounted on the windshield behind the rear-view mirror of the currently travelling vehicle. When the vehicle reaches an intersection where the traffic light is red, the vehicle stops and the processing method is started. After the algorithm starts, the camera captures consecutive frames of the scene in front of the vehicle, and the gravity sensor acquires the vehicle's acceleration information in the three directions.
  • Based on the image features of the front images and/or the acceleration values, the driving state of the vehicle is determined to be the stationary state, and the motion trajectory of the target object in the front images is tracked. If the trajectory shows that the preceding vehicle is driving off, reminder information prompting the current vehicle to start is generated.
  • The reminder information is output through a prompting device, so that the driver can start the current vehicle in time.
  • Optionally, determining the driving state of the current vehicle based on the image features and/or the acceleration information includes: when the driving state of the current vehicle is in the initial state, determining that the driving state is stationary or moving based on the frame-difference foreground ratio of N consecutive front images and/or the acceleration information of the current vehicle; when the driving state is not in the initial state, determining that it is stationary or moving based on the frame-difference foreground ratio and frame-difference foreground dispersion of the N consecutive front images together with the acceleration information of the current vehicle.
  • The image features of the front image comprise the frame-difference foreground ratio and the frame-difference foreground dispersion.
  • The frame-difference foreground ratio is the ratio of the number of frame-difference foreground pixels to the total number of image pixels; it measures the proportion of the scene that moves relative to the vehicle. The frame-difference foreground dispersion is the ratio of the foreground block area to the total number of image pixels; it measures how dispersed the foreground is, distinguishing frame differences caused by stationary objects from those caused by moving targets.
  • A foreground block is a rectangular block obtained by applying erosion, dilation and connected-component analysis to the frame-difference foreground.
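A minimal sketch of the two image features just defined, computed on toy binary frames in pure Python (no imaging library). The single bounding box used as the "foreground block" is a simplification of the erosion/dilation/connected-component step and is an assumption of this sketch.

```python
def frame_difference(prev, curr, thr=20):
    """Binary frame-difference foreground: 1 where a pixel changed by more than thr."""
    return [[1 if abs(c - p) > thr else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def foreground_ratio(fg):
    """Number of foreground pixels divided by the total number of image pixels."""
    total = len(fg) * len(fg[0])
    return sum(map(sum, fg)) / total

def foreground_dispersion(fg):
    """Area of the bounding box of all foreground pixels divided by the total
    pixel count (a simplified stand-in for the foreground-block area)."""
    rows = [r for r, row in enumerate(fg) if any(row)]
    cols = [c for c in range(len(fg[0])) if any(row[c] for row in fg)]
    if not rows:
        return 0.0
    area = (rows[-1] - rows[0] + 1) * (cols[-1] - cols[0] + 1)
    return area / (len(fg) * len(fg[0]))

# 4x4 frames in which one 2x2 patch changes (a small moving target in a static scene).
prev = [[0] * 4 for _ in range(4)]
curr = [[0] * 4 for _ in range(4)]
curr[1][1] = curr[1][2] = curr[2][1] = curr[2][2] = 100
fg = frame_difference(prev, curr)
```

For this toy frame pair both features equal 4/16: a compact moving target yields a low ratio and low dispersion, matching the stationary-vehicle case described above, whereas ego-motion would light up foreground across the whole frame and drive both features up.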
  • When the current vehicle moves, both the stationary objects in the scene (roads, buildings, railings and so on) and the moving objects (other vehicles, pedestrians) move relative to it.
  • The frame difference is obtained by applying a frame-difference algorithm to consecutive images, and a vehicle in the moving state produces frame-difference characteristics different from those of a vehicle in the stationary state.
  • When the vehicle is moving, the frame difference is dominated by the relative motion between the vehicle and the stationary objects in the scene, so both the frame-difference foreground ratio and the frame-difference foreground dispersion are large.
  • When the vehicle is stationary, the frame difference is caused mainly by the relative motion of other vehicles or pedestrians, so the foreground ratio and foreground dispersion are small. It is therefore possible to determine whether the current driving state is stationary or moving from the frame-difference foreground ratio and/or foreground dispersion of N consecutive front images, combined with the acceleration information of the current vehicle.
  • the acceleration information acquired by the gravity sensor in the above embodiment includes an acceleration value, which can accurately reflect whether the current vehicle is in an acceleration/deceleration state, and the acceleration value is large when the current vehicle starts or stops, and the acceleration value is obtained when the vehicle is moving at a constant speed or at rest. Smaller.
  • the frame difference foreground dispersion of the scene in front of the vehicle is used when determining the running state of the current vehicle, which removes the interference of bypass vehicles with the determination of the current vehicle's running state.
  • the traveling state of the current vehicle is determined based on the image features and/or acceleration information of the front image of the front of the vehicle, thereby achieving accurate determination of the current traveling state of the vehicle.
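As a rough illustration of these two features, the sketch below computes a frame-difference foreground ratio and a simplified dispersion measure; the erosion/dilation/connected-component step is reduced to a single bounding box, and the difference threshold is an arbitrary placeholder:

```python
import numpy as np

def frame_diff_features(prev_frame, curr_frame, diff_thresh=20):
    """Compute the frame-difference foreground ratio and a simplified
    dispersion measure for two consecutive grayscale frames.

    foreground ratio      = foreground pixel count / total pixel count
    foreground dispersion = foreground bounding-box area / total pixel count
    (the patent forms foreground blocks via erosion, dilation, and
    connected components; a single bounding box stands in for that here).
    """
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    fg = diff > diff_thresh                      # foreground mask
    total = fg.size
    ratio = fg.sum() / total
    ys, xs = np.nonzero(fg)
    if len(ys) == 0:
        return ratio, 0.0
    box_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return ratio, box_area / total

# A moving camera changes most pixels -> large ratio and dispersion;
# a single small moving object -> small ratio and dispersion.
still = np.zeros((100, 100), dtype=np.uint8)
small_move = still.copy(); small_move[40:50, 40:50] = 255
r_small, d_small = frame_diff_features(still, small_move)
global_move = np.full((100, 100), 255, dtype=np.uint8)
r_big, d_big = frame_diff_features(still, global_move)
```

Here the 10×10 moving patch yields ratio and dispersion of 0.01, while a globally changed frame yields 1.0 for both, matching the moving-vehicle vs. stationary-vehicle contrast described above.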
  • determining whether the running state of the current vehicle is the stationary state or the moving state based on the frame difference foreground ratio of N consecutive frames of the front image and/or the acceleration information of the current vehicle includes: if the frame difference foreground ratio of the N consecutive frames is greater than a preset ratio threshold, or the acceleration value of the current vehicle corresponding to each of the N frames is greater than a preset acceleration threshold, determining that the running state of the current vehicle is the moving state, where the acceleration information includes an acceleration value; if the frame difference foreground ratio of the N consecutive frames is not greater than the preset ratio threshold, and the acceleration value corresponding to each of the N frames is not greater than the preset acceleration threshold, determining that the running state of the current vehicle is the stationary state.
  • the acceleration value is acceleration information collected by a gravity sensor.
  • when the running state of the current vehicle is not the initial state, the running state is determined to be the stationary state or the moving state based on the frame difference foreground ratio and/or the frame difference foreground dispersion of the N consecutive frames and the acceleration information of the current vehicle, as follows:
  • if the current running state is the moving state: if the acceleration value corresponding to each of the N consecutive frames is not greater than the preset acceleration threshold and the frame difference foreground ratio of the N frames is not greater than the preset ratio threshold, or if the acceleration value corresponding to each frame is not greater than the preset acceleration threshold and the frame difference foreground dispersion of the N frames is not greater than a preset dispersion threshold, it is determined that the running state of the current vehicle has changed to the stationary state; otherwise, the running state remains the moving state.
  • if the current running state is the stationary state: if the acceleration value corresponding to each of the N consecutive frames is greater than the preset acceleration threshold, it is determined that the running state has changed to the moving state; otherwise, the running state remains the stationary state.
  • the preset acceleration threshold may be denoted T_g, the preset ratio threshold T_f, and the preset dispersion threshold T_s.
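The initial-state decision rule above can be sketched as follows; the threshold values and the "undecided" fallback for mixed evidence are illustrative assumptions, not the patent's tuned behavior:

```python
def decide_initial_state(fg_ratios, accels, T_f=0.2, T_g=0.5):
    """Stationary/moving decision from the initial state, following the
    rule above: all N foreground ratios above T_f, or all N acceleration
    values above T_g, means the vehicle is moving; neither quantity ever
    above its threshold means it is stationary.  T_f and T_g are
    illustrative placeholders.
    """
    if all(r > T_f for r in fg_ratios) or all(a > T_g for a in accels):
        return "moving"
    if all(r <= T_f for r in fg_ratios) and all(a <= T_g for a in accels):
        return "stationary"
    return "undecided"  # mixed evidence: keep accumulating frames

state_a = decide_initial_state([0.5, 0.6, 0.7], [0.1, 0.2, 0.1])   # big foreground ratios
state_b = decide_initial_state([0.05, 0.04, 0.06], [0.1, 0.2, 0.1])  # everything quiet
```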
  • in this embodiment, a video sensor (such as the above-mentioned camera) captures consecutive frames of the scene in front of the current vehicle's head to obtain image information, the gravity sensor acquires the acceleration information of the current vehicle, and the running state of the current vehicle is determined from this image information and acceleration information.
  • the state of the current vehicle can be determined by the state machine model.
  • the state machine model includes three states: an initial state, a motion state, and a stationary state.
  • the initial state applies at system startup: the number of captured frames of the image in front of the vehicle head has not yet reached the minimum frame-count threshold set by the system, so the system cannot determine the running state of the current vehicle, and the running state is therefore set to the initial state.
  • the moving state refers to the current vehicle moving relative to the road surface, including acceleration, deceleration, and uniform motion.
  • the stationary state refers to the current vehicle being stationary relative to the road surface, such as waiting for traffic lights or traffic jams.
  • the above system refers to a system for realizing the processing method of the preceding vehicle start.
  • the state machine model is as shown in FIG. 2, and the switching between the three states of the state machine model can be realized by using the image features of the front image of the front of the vehicle and the acceleration information acquired by the gravity sensor.
  • the current driving state of the vehicle is the initial state.
  • when the running state of the current vehicle is the initial state, the running state is determined to be the stationary state or the moving state based on the frame difference foreground ratio of N consecutive frames of the image in front of the current vehicle's head and/or the acceleration information of the current vehicle (i.e., by rule 1 or rule 2 in FIG. 2):
  • if the acceleration value of the current vehicle corresponding to each of the N consecutive frames exceeds the preset acceleration threshold T_g, or the frame difference foreground ratio of each of the N frames is greater than the preset ratio threshold T_f, it is determined that the running state of the current vehicle is the moving state; this is rule 1 in FIG. 2.
  • if the acceleration value corresponding to each frame is less than the preset acceleration threshold T_g, and the frame difference foreground ratio corresponding to each frame is smaller than the preset ratio threshold T_f, it is determined that the running state of the current vehicle is the stationary state; this is rule 2 in FIG. 2.
  • when the running state is not the initial state, the running state of the current vehicle is determined according to rules 3 and 4 shown in FIG. 2.
  • the state machine model can be used to accurately determine the current running state of the vehicle.
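The three-state model and rules 1–4 can be sketched as a small state machine; all threshold values are placeholders, and the per-call feature lists stand in for the N-frame window:

```python
class VehicleStateMachine:
    """Sketch of the three-state model of FIG. 2 (initial/moving/stationary).

    Rules 1/2 leave the initial state using the foreground ratio and
    acceleration; rule 3 switches moving -> stationary, with the
    dispersion feature additionally allowed to confirm stillness; rule 4
    switches stationary -> moving on strong acceleration.  Thresholds
    T_f, T_s, T_g are illustrative placeholders.
    """
    def __init__(self, T_f=0.2, T_s=0.3, T_g=0.5):
        self.state = "initial"
        self.T_f, self.T_s, self.T_g = T_f, T_s, T_g

    def update(self, fg_ratios, dispersions, accels):
        fast = all(a > self.T_g for a in accels)
        slow = all(a <= self.T_g for a in accels)
        busy = all(r > self.T_f for r in fg_ratios)
        calm = all(r <= self.T_f for r in fg_ratios)
        spread_small = all(s <= self.T_s for s in dispersions)
        if self.state == "initial":
            if busy or fast:                        # rule 1
                self.state = "moving"
            elif calm and slow:                     # rule 2
                self.state = "stationary"
        elif self.state == "moving":
            if slow and (calm or spread_small):     # rule 3
                self.state = "stationary"
        else:  # stationary
            if fast:                                # rule 4
                self.state = "moving"
        return self.state

sm = VehicleStateMachine()
s1 = sm.update([0.5] * 3, [0.6] * 3, [0.1] * 3)   # rule 1: large foreground ratio
s2 = sm.update([0.05] * 3, [0.1] * 3, [0.1] * 3)  # rule 3: everything quiet
s3 = sm.update([0.05] * 3, [0.1] * 3, [0.9] * 3)  # rule 4: strong acceleration
```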
  • acquiring the motion trajectories of target objects in the image in front of the vehicle head includes: performing vehicle-tail detection on the image in front of the vehicle head to obtain one or more target objects in the image.
  • performing vehicle-tail detection on the image in front of the vehicle head includes: using different vehicle-tail detection models in different detection time periods.
  • determining, according to the motion trajectories of the target objects, whether to generate reminder information for reminding the current vehicle to start includes: determining whether the previous vehicle of the current vehicle has moved based on the motion trajectory of each target object; if the previous vehicle of the current vehicle has moved, generating reminder information for reminding the current vehicle to start, where the previous vehicle is a vehicle traveling in the same lane as the current vehicle and in front of the current vehicle's head.
  • Step S301 Acquire an image of the front of the front of the current vehicle through the video sensor.
  • Step S302 Acquire acceleration information of the current vehicle by using a gravity sensor.
  • a continuous sequence of pictures in front of the current vehicle's head can be captured by a camera to obtain the image in front of the vehicle head, while the gravity sensor acquires the acceleration information of the current vehicle in three different directions in real time.
  • Step S303 Determine whether the running state of the current vehicle is a stationary state based on image features and/or acceleration information of the image of the front of the vehicle head.
  • if it is determined that the running state of the current vehicle is the stationary state, step S304 is performed; if it is not the stationary state, the reminding algorithm ends.
  • Step S304 Perform tail detection on the image in front of the vehicle head to obtain one or more target objects in the image in front of the vehicle head.
  • vehicle-tail detection is performed on each frame of the image in front of the current vehicle's head collected by the camera, and the vehicle-tail detection is turned off when the running state of the current vehicle is the moving state.
  • the vehicle-tail detection adopts an offline learning mode: two vehicle-tail models are trained for two detection time periods (such as day and night), each trained with the Adaboost algorithm on HOG features extracted from vehicle-tail samples.
  • the daytime tail model obtained by training is used during the day, and at night the system adaptively switches to the night tail model; using different models in different time periods ensures the availability of the preceding-vehicle start processing method at night.
  • the image in front of the vehicle head is downsampled by a specific scale factor to obtain a downsampled image (the downsampling generates a thumbnail of the image, improving the processing speed of the system); the downsampled image is then scanned with a sliding window, the matching degree between each window image and the trained tail model is calculated, and windows with a high matching degree are output as target objects, yielding one or more target objects in the image in front of the vehicle head.
  • the specific scale factor can be preset.
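The downsample-and-scan step might look like the following toy sketch, where a simple brightness score stands in for matching against the trained HOG/Adaboost tail model; the window size, stride, and score function are all illustrative assumptions:

```python
import numpy as np

def downsample(img, factor):
    """Naive downsampling by striding - a stand-in for the thumbnail step."""
    return img[::factor, ::factor]

def sliding_window_detect(img, win=8, step=4, score_thresh=0.8):
    """Scan the image with a sliding window and report windows whose
    mean intensity matches a bright 'vehicle tail' template.  The real
    detector instead scores each window against the trained tail model.
    """
    hits = []
    h, w = img.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            patch = img[y:y + win, x:x + win]
            score = patch.mean() / 255.0      # toy matching score in [0, 1]
            if score >= score_thresh:
                hits.append((x, y, score))
    return hits

frame = np.zeros((64, 64), dtype=np.uint8)
frame[16:32, 16:32] = 255                     # one bright "tail" region
small = downsample(frame, 2)                  # 32x32 thumbnail
detections = sliding_window_detect(small)     # windows with high match score
```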
  • Step S305 Perform target tracking on the detected target objects to obtain motion trajectories of the respective target objects.
  • the target tracking adopts the color-based target tracking algorithm ColorCSK, which can track multiple target objects at the same time.
  • ColorCSK algorithm uses color as the image feature and uses the frequency domain to achieve fast correlation matching.
  • the algorithm includes two parts, modeling and matching/tracking: when modeling, the tail position of the target object is used as the training sample, and the target model parameters are updated in the frequency domain according to the expected response; when matching and tracking, the existing vehicle-tail model is correlated with the target object, and the position with the highest response value is output as the trajectory position of the target object in the current frame image.
  • since the size of the same target object in the image varies with its distance from the current vehicle, the ColorCSK algorithm is scale-adaptive and can maintain multiple tracking models of different sizes, achieving continuous and stable tracking of the same target object (e.g., a car) and obtaining the tracking trajectory of each target object.
  • when the running state of the current vehicle becomes the moving state, the target tracking is stopped.
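The frequency-domain correlation matching at the heart of CSK-style trackers can be illustrated as below; this is a bare FFT cross-correlation that finds the peak response, while the actual ColorCSK model additionally learns a regularized color-feature model, which is omitted here:

```python
import numpy as np

def correlate_fft(search, template):
    """Locate a template in a search region by cross-correlation computed
    in the frequency domain (the speed trick CSK-style trackers rely on).
    Returns the (row, col) of the peak response, i.e. the offset of the
    template's top-left corner within the search region.
    """
    # zero-pad the template to the search size, then correlate via FFT:
    tpad = np.zeros_like(search, dtype=np.float64)
    th, tw = template.shape
    tpad[:th, :tw] = template
    resp = np.fft.ifft2(np.fft.fft2(search) * np.conj(np.fft.fft2(tpad))).real
    return np.unravel_index(np.argmax(resp), resp.shape)

region = np.zeros((32, 32))
region[10:14, 20:24] = 1.0          # tracked tail sits at row 10, col 20
tail = np.ones((4, 4))              # template learned from the previous frame
pos = correlate_fft(region, tail)   # peak of the correlation response
```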
  • Step S306 detecting a lane line in the image in front of the head.
  • Step S307 determining whether the previous vehicle of the current vehicle is moving based on the detection result of the lane line and the motion trajectory of the target object.
  • based on the relative position of the target object and the lane line, it is determined whether the target object is a bypass target, removing false reminders to the current vehicle caused by a bypass vehicle starting.
  • step S308 is performed; if the previous vehicle of the current vehicle does not move, the reminding process is ended.
  • Step S308 Generate reminder information for reminding the current vehicle to start.
  • determining whether the previous vehicle of the current vehicle has moved based on the motion trajectory of each target object may include: determining the running direction of each target object based on its motion trajectory; using the running direction and the length of the motion trajectory to determine whether the target object is a moving candidate target, obtaining a candidate target queue; and determining whether each candidate target in the candidate target queue is a bypass target, where a bypass target is a target traveling in a different lane from the current vehicle.
  • if a candidate target is a bypass target, it is deleted from the candidate target queue to obtain an updated candidate target queue; it is then determined whether a candidate target in the updated candidate target queue is the previous vehicle of the current vehicle; if so, it is determined that the previous vehicle of the current vehicle has moved.
  • specifically, when the running state of the current vehicle is the stationary state, the motion trajectories of one or more target objects are obtained through vehicle-tail detection and target tracking, and the running direction of each target object (turning or straight) is determined from its motion trajectory.
  • the preset judgment condition corresponding to the running direction is then used to determine whether the target object is a moving candidate target, yielding a candidate target queue; for each candidate target it is determined whether it travels in the same lane as the current vehicle, and if it travels in a different lane it is a bypass target and is deleted from the candidate target queue, yielding an updated queue.
  • the previous vehicle refers to the vehicle that is in the same lane as the current vehicle and in front of the current vehicle's head; if a candidate target is the previous vehicle of the current vehicle, it is determined that the previous vehicle has moved, and reminder information for reminding the current vehicle to start is generated.
  • whether the previous vehicle of the current vehicle has moved is determined based on the motion trajectory of each target object, and if it has moved, reminder information for reminding the current vehicle to start is generated, so that the current vehicle acts promptly.
  • determining the running direction of each target object based on its motion trajectory includes: if the curvature of the fitted curve of the target object's motion trajectory is greater than a preset curvature threshold, determining that the running direction of the target object is turning; if the curvature is not greater than the preset curvature threshold, determining that the running direction is straight.
  • determining whether the target object is a moving candidate target using the running direction and the length of the motion trajectory includes: if the running direction is turning and the trajectory length is greater than a first preset length threshold, determining that the target object is a moving candidate target; if the running direction is straight and the trajectory length is greater than a second preset length threshold, determining that the target object is a moving candidate target.
  • the preset curvature threshold may be denoted T, the first preset length threshold L1, and the second preset length threshold L2.
  • for each frame of image, the motion trajectories of the one or more target objects obtained through vehicle-tail detection and target tracking are updated.
  • the running direction of each target object is determined from its motion trajectory: if the curvature of the target object's trajectory curve is greater than the preset curvature threshold T, the running direction is turning; if the curvature is less than T, the running direction is straight.
  • for a target object whose running direction is turning, if its trajectory length is greater than the first preset length threshold L1 (in pixels), it is determined to be a moving candidate target and added to the candidate target queue; for a target object whose running direction is straight, if its trajectory length is greater than the second preset length threshold L2 (in pixels), it is determined to be a moving candidate target and added to the candidate target queue. If the trajectories are short and no candidate target satisfies the condition, processing jumps directly to the trajectory update for the next frame.
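The direction-and-length filtering above can be sketched as follows; the curvature proxy (path length over chord length) and all threshold values are illustrative stand-ins for the patent's fitted-curve curvature:

```python
import math

def fit_curvature(points):
    """Crude curvature proxy: path length divided by the straight-line
    distance between the endpoints, minus 1 (0 for a straight track).
    The patent fits a curve and thresholds its curvature; this stand-in
    keeps the sketch dependency-free.
    """
    path = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return path / chord - 1.0 if chord > 0 else 0.0

def moving_candidates(tracks, T=0.05, L1=40, L2=20):
    """Classify each track as turning or straight, then keep it as a
    moving candidate if it is long enough (L1 pixels for turns, L2 for
    straight runs).  Threshold values are illustrative.
    """
    queue = []
    for name, pts in tracks.items():
        length = sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
        turning = fit_curvature(pts) > T
        if (turning and length > L1) or (not turning and length > L2):
            queue.append(name)
    return queue

tracks = {
    "straight_long": [(0, 0), (0, 10), (0, 20), (0, 30)],  # length 30 > L2
    "straight_short": [(0, 0), (0, 5)],                    # too short, dropped
    "turn_long": [(0, 0), (30, 0), (30, 30)],              # length 60 > L1
}
queue = moving_candidates(tracks)
```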
  • Step S401 Acquire a motion track of the target object.
  • Step S402 Determine a running direction of each target object based on motion trajectories of the respective target objects.
  • if the running direction of the target object is turning, step S403 is performed; if the running direction is straight, step S404 is performed.
  • if the curvature of the fitted curve of the target object's motion trajectory is greater than the preset curvature threshold, the running direction of the target object is determined to be turning; if not, the running direction is determined to be straight.
  • Step S403 Determine whether the length of the running track of the target object is greater than a first preset length threshold.
  • if so, the target object is a moving candidate target.
  • Step S404 Determine whether the length of the running track of the target object is greater than a second preset length threshold.
  • if so, the target object is a moving candidate target.
  • Step S405 Add the target object to the candidate target queue.
  • the target object determined to be a moving candidate target is added to the candidate target queue.
  • Step S406 Determine whether the candidate target in the candidate target queue is a bypass target.
  • the bypass target is a target that is driving in a different lane from the current vehicle.
  • if the candidate target is a bypass target, step S409 is performed: the bypass target is deleted from the candidate target queue to obtain the updated candidate target queue; if the candidate target is not a bypass target, step S407 is performed.
  • Step S407 It is determined whether the candidate target is the previous vehicle of the current vehicle.
  • Step S408 Alarm.
  • determining whether the candidate target is a bypass target includes: detecting the lane line in the image in front of the vehicle head; if a lane line is detected, determining whether the candidate target is in the same lane as the current vehicle, and if it is in the same lane, the candidate target is not a bypass target. If no lane line is detected, it is determined, based on the position of the candidate target in the image in front of the vehicle head and its motion trajectory, whether the candidate target satisfies a bypass-vehicle trajectory; if it does, the candidate target is determined to be a bypass target, and if it does not, the candidate target is determined not to be a bypass target.
  • lane line detection is used to determine whether the candidate target is in the same lane as the current vehicle, filtering false triggers caused by bypass vehicles; the lane line detection module includes line segment detection, line segment merging, lane line screening, and lane line tracking, where line segment detection uses the LSD algorithm.
  • the lane line in the image in front of the vehicle head is detected, and whether each candidate target in the candidate target queue is a bypass target is determined both when a valid lane line is detected and when it is not; bypass targets are deleted from the candidate target queue to obtain the updated queue, filtering false alarms to the current vehicle caused by bypass vehicles starting.
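The two-branch bypass decision can be sketched as below; the lane boundaries are given as x-positions at the target's image row, and the trajectory fallback (a large sideways drift) is an illustrative stand-in for the patent's bypass-trajectory criterion:

```python
def in_same_lane(target_x, left_x, right_x):
    """With a detected lane line pair, a candidate target is a lane
    vehicle when its position lies between the two lane boundaries at
    its image row; otherwise it is a bypass target."""
    return left_x < target_x < right_x

def is_bypass(target_x, lanes=None, track=None, drift_thresh=15):
    """Decision sketch for the two branches above: use lane lines when
    available; otherwise fall back to a trajectory test (here, a large
    sideways drift marks a bypass-like track).  The drift rule is an
    illustrative stand-in, not the patent's exact criterion.
    """
    if lanes is not None:
        left_x, right_x = lanes
        return not in_same_lane(target_x, left_x, right_x)
    # no valid lane line: inspect the motion trajectory instead
    xs = [p[0] for p in track]
    return abs(xs[-1] - xs[0]) > drift_thresh

same = is_bypass(160, lanes=(100, 220))                        # inside the lane
side = is_bypass(300, lanes=(100, 220))                        # outside -> bypass
drift = is_bypass(0, lanes=None, track=[(10, 200), (60, 180)])  # large sideways drift
```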
  • the determination process of the bypass vehicle of the above embodiment of the present application is described in detail below with reference to FIG. 5.
  • the determination process of the bypass vehicle shown in FIG. 5 is implemented by the following steps:
  • Step S501 Detect whether the lane line in the image in front of the head is valid.
  • if a valid lane line is detected, step S502 is performed; if no valid lane line is detected, step S503 is performed.
  • Step S502 determining whether the candidate target is within the valid lane line.
  • if the candidate target is in the same lane as the current vehicle, step S504 is performed: the candidate target is determined to be a lane vehicle; if the candidate target is not in the same lane as the current vehicle, step S505 is performed: the candidate target is determined to be a bypass vehicle.
  • the lane vehicle refers to a candidate target in the same lane as the current vehicle
  • the bypass vehicle refers to a candidate target that is not in the same lane as the current vehicle.
  • Step S503 Determine whether the candidate target satisfies the bypass-vehicle motion trajectory.
  • whether the candidate target satisfies the bypass vehicle motion trajectory is determined based on the position of the candidate target in the front image of the head and the motion trajectory of the candidate target.
  • if the candidate target satisfies the bypass-vehicle motion trajectory, step S505 is performed; if the candidate target does not satisfy the bypass-vehicle motion trajectory, step S504 is performed.
  • determining whether a candidate target in the updated candidate target queue is the previous vehicle of the current vehicle includes: determining whether the candidate target is the previous vehicle based on the initial positions of all candidate targets in the updated queue and the relative positions between the candidate targets.
  • determining whether the candidate target is the previous vehicle based on the initial positions of all candidate targets in the updated queue and the relative positions between candidate targets may include: determining that the candidate target closest to the midpoint of the lower edge of the image in front of the vehicle head is the previous vehicle.
  • if such a closest candidate target exists, it is the previous vehicle of the current vehicle; it is determined that the previous vehicle has moved, and reminder information for reminding the current vehicle to start is generated so that the current vehicle acts in time. If no valid candidate target is obtained, i.e., there is no moving previous vehicle in the candidate target queue, processing proceeds to the trajectory update for the next frame.
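Selecting the previous vehicle as the candidate closest to the midpoint of the image's lower edge can be sketched as follows (candidate names and the tuple layout are illustrative assumptions):

```python
import math

def pick_previous_vehicle(candidates, img_w, img_h):
    """Choose, among the surviving candidates, the one whose initial
    position is closest to the midpoint of the image's lower edge -
    the criterion above for 'the vehicle directly ahead in our lane'.
    Returns None when the queue is empty (no moving previous vehicle).
    """
    if not candidates:
        return None
    anchor = (img_w / 2, img_h)          # midpoint of the lower edge
    return min(candidates, key=lambda c: math.dist(c[1], anchor))

# candidates as (name, initial (x, y) position in the image):
queue = [("left_car", (80.0, 300.0)), ("ahead_car", (320.0, 400.0))]
prev = pick_previous_vehicle(queue, img_w=640, img_h=480)
```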
  • the voice or image may be output by the alarm device to alert the driver of the current vehicle and/or to alert the driver by changes in light, such as changing ambient lights in the car at night.
  • it is determined whether a candidate target in the updated candidate target queue is the previous vehicle of the current vehicle; when a candidate target is the previous vehicle, it is determined that the previous vehicle has moved, and reminder information for reminding the current vehicle to start is generated so that the current vehicle acts in time.
  • the embodiment of the present application is a processing method of a front vehicle starting based on a gravity sensor and a monocular vision
  • monocular vision means that the scene information in front of the current vehicle can be acquired by a single camera and intelligently analyzed.
  • the state machine model is adopted, and the driving state of the current vehicle can be accurately determined through multi-feature fusion.
  • in different detection time periods, different vehicle-tail detection models are used to perform tail detection on each frame of the image in front of the vehicle head, obtaining one or more target objects in the image.
  • false alarms for the current vehicle are filtered out by removing bypass vehicles; after the bypass vehicles are removed, it is determined whether a previous vehicle of the current vehicle exists among the remaining target objects, and if a moving previous vehicle is found, reminder information is generated to remind the current vehicle to act in time.
  • if a valid lane line is not detected, whether the candidate target satisfies the bypass-vehicle motion trajectory is determined based on the position of the candidate target in the image in front of the vehicle head and its motion trajectory; if the candidate target satisfies the bypass-vehicle motion trajectory, it is determined to be a bypass vehicle and is deleted.
  • the driving state of the current vehicle is accurately determined by using the combination of the image information of the front of the vehicle head and the acceleration information.
  • whether the previous vehicle of the current vehicle has started is determined based on the motion trajectories of the target objects in the image in front of the vehicle head, so as to decide whether to generate reminder information.
  • the scheme uses few parameters and produces accurate results, solving the prior-art problem of being unable to accurately remind of a preceding vehicle's start while driving, so that the current vehicle can be promptly reminded when the preceding vehicle departs and can move in time.
  • the method according to the above embodiment can be implemented by means of software plus a necessary general hardware platform, and of course by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, or the part that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods described in the various embodiments of the present application.
  • a processing apparatus for the preceding-vehicle start processing method is also provided, applied to a preceding-vehicle start processing system comprising a video sensor and a gravity sensor, as shown in FIG.
  • the apparatus includes a collecting unit 20, a determining unit 40, an obtaining unit 60, and a generating unit 80.
  • the collecting unit 20 is configured to collect an image of the front of the front of the current vehicle through the video sensor, and collect acceleration information of the current vehicle by using the gravity sensor.
  • the determining unit 40 is configured to determine a driving state of the current vehicle based on image features and/or acceleration information of the image of the front of the vehicle head.
  • the current state of the vehicle may be a stationary state or a motion state.
  • the stationary state refers to the current vehicle being stationary relative to the road surface, such as waiting for a traffic light or a traffic jam;
  • the motion state refers to the current vehicle relative to the road surface motion, including acceleration, deceleration, and uniform motion.
  • the obtaining unit 60 is configured to acquire a motion trajectory of the target object in the image in front of the vehicle head when the running state of the current vehicle is a stationary state.
  • the target object in the image in front of the front head may be one or more.
  • the generating unit 80 is configured to determine, according to the motion track of the target object, whether to generate reminder information for reminding the current vehicle to start.
  • the running state of the current vehicle is determined based on the image features of the image in front of the vehicle head acquired by the video sensor and/or the acceleration information collected by the gravity sensor; when the running state is the stationary state, the motion trajectories of the target objects (including the preceding vehicle) in the image in front of the vehicle head are acquired, and whether to generate reminder information for reminding the current vehicle to start is determined according to those motion trajectories.
  • the driving state of the current vehicle is accurately determined by using the combination of the image information of the front of the vehicle head and the acceleration information.
  • whether the preceding vehicle of the current vehicle has started is determined based on the motion trajectories of the target objects in the image in front of the vehicle head.
  • the scheme uses few parameters and produces accurate results, solving the prior-art problem of being unable to accurately remind of a preceding vehicle's start while driving, so that the current vehicle can be promptly reminded when the preceding vehicle departs and can move in time.
  • the camera mounted behind the windshield near the rearview mirror of the current vehicle can capture continuous pictures in front of the current vehicle's head in real time to obtain the image in front of the vehicle head, and the gravity sensor obtains the acceleration information of the current vehicle in real time.
  • the acceleration information includes the acceleration values of the current vehicle in three different directions in the world coordinate system: along the vehicle's direction of travel; perpendicular to the direction of travel and parallel to the road surface; and perpendicular to the road surface.
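A minimal check of whether a per-frame acceleration reading exceeds the threshold T_g might look like this; using the magnitude of the two horizontal components is an illustrative rule, not the patent's exact formula:

```python
import math

def accel_exceeds(ax, ay, az, T_g=0.5):
    """Gravity-sensor reading in the three vehicle-aligned axes
    described above (along travel, lateral, vertical).  A start or stop
    shows up mainly along the travel axis; here the magnitude of the
    two horizontal components is compared against the threshold T_g.
    The vertical component az (dominated by gravity) is ignored.
    """
    return math.hypot(ax, ay) > T_g

moving = accel_exceeds(0.9, 0.1, 9.8)    # hard acceleration along travel axis
steady = accel_exceeds(0.05, 0.02, 9.8)  # cruising or at rest
```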
  • the camera in the above embodiment may be a single camera: continuous multi-frame images in front of the current vehicle's head are captured by the camera to obtain the image in front of the vehicle head, the gravity sensor acquires the acceleration information of the current vehicle, and the running state of the current vehicle is determined based on the image features and/or the acceleration information; if the running state is determined to be the stationary state, the motion trajectories of one or more target objects in the image are acquired, and based on these motion trajectories it is determined whether the preceding vehicle of the current vehicle has departed; if it has departed while the current vehicle remains stationary, reminder information is generated.
  • The determining unit includes: a first determining module, configured to determine, when the current driving state is the initial state, whether the vehicle is stationary or moving from the frame-difference foreground ratio of N consecutive front-view images and/or the current vehicle's acceleration information; and a second determining module, configured to determine, when the current driving state is not the initial state, whether the vehicle is stationary or moving from the frame-difference foreground ratio and/or frame-difference foreground dispersion of N consecutive front-view images together with the current vehicle's acceleration information.
  • The image features of the front-view image include the frame-difference foreground ratio and the frame-difference foreground dispersion.
  • The frame-difference foreground ratio is the number of frame-difference foreground points divided by the number of image pixels; the frame-difference foreground dispersion is the total foreground-block area divided by the number of image pixels, and measures how scattered the foreground is.
  • A foreground block is a rectangular block obtained by applying erosion, dilation and connected-component processing to the frame-difference foreground points.
  • The acceleration information acquired by the gravity sensor includes acceleration values that accurately reflect whether the current vehicle is accelerating or decelerating: the values are large when the vehicle starts or stops, and small when it is moving at constant speed or at rest.
  • Using the frame-difference foreground dispersion of the scene in front of the vehicle when determining the driving state removes the interference that vehicles in adjacent lanes would otherwise cause.
  • The current vehicle's driving state is determined from the image features and/or acceleration information of the front-view images, giving an accurate determination of the driving state.
  • In the initial state, the frame-difference foreground ratio of the images and/or the current vehicle's acceleration information determine whether the vehicle is stationary or moving: if the frame-difference foreground ratio of N consecutive front-view images exceeds a preset ratio threshold, or the acceleration value corresponding to every one of the N consecutive front-view images exceeds a preset acceleration threshold, the driving state is determined to be the motion state (the acceleration information includes the acceleration value); if neither condition holds, the driving state is determined to be the stationary state.
  • The acceleration value is the acceleration information collected by the gravity sensor.
  • When the current driving state is not the initial state, it is determined to be stationary or moving from the frame-difference foreground ratio and/or dispersion of the N consecutive front-view images and the current vehicle's acceleration information, as follows. If the current state is the motion state: when the acceleration value corresponding to each of the N consecutive front-view images does not exceed the preset acceleration threshold and the frame-difference foreground ratio of those images does not exceed the preset ratio threshold, or when the acceleration values do not exceed the threshold and the frame-difference foreground dispersion does not exceed a preset dispersion threshold, the state changes to stationary; otherwise it remains the motion state. If the current state is the stationary state: when the acceleration value corresponding to each of the N consecutive front-view images exceeds the preset acceleration threshold, the state changes to the motion state; otherwise it remains stationary.
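The state transitions above can be sketched as follows. The threshold values are hypothetical placeholders (the patent names an acceleration threshold, a ratio threshold and a dispersion threshold but gives no numbers), and the exact way the N per-frame measurements are aggregated is an assumption:

```python
# Hypothetical thresholds; the patent specifies no numeric values.
T_ACC, T_RATIO, T_DISP = 0.5, 0.2, 0.1

def update_state(state, accels, fg_ratios, fg_disps):
    """Return the next driving state given the last N frames of evidence.

    state     -- "initial", "still" or "moving"
    accels    -- per-frame acceleration values of the current vehicle
    fg_ratios -- per-frame frame-difference foreground ratios
    fg_disps  -- per-frame frame-difference foreground dispersions
    """
    all_slow = all(a <= T_ACC for a in accels)
    all_fast = all(a > T_ACC for a in accels)
    low_ratio = all(r <= T_RATIO for r in fg_ratios)
    low_disp = all(d <= T_DISP for d in fg_disps)
    high_ratio = all(r > T_RATIO for r in fg_ratios)

    if state == "initial":
        # Initial state: a high foreground ratio or sustained
        # acceleration alone decides "moving"; otherwise "still".
        return "moving" if (high_ratio or all_fast) else "still"
    if state == "moving":
        # Moving -> still when acceleration stays low AND either the
        # foreground ratio or the foreground dispersion stays low.
        return "still" if all_slow and (low_ratio or low_disp) else "moving"
    # Still -> moving only on sustained acceleration over all N frames.
    return "moving" if all_fast else "still"
```

The dispersion test matters only when leaving the motion state, which matches the text: a stationary vehicle next to a moving bypass vehicle shows a concentrated foreground, so either a low ratio or a low dispersion suffices.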
  • The acquisition unit includes: a tail detection module configured to perform vehicle-tail detection on the front-view image to obtain one or more target objects in it; and a target tracking module configured to track each detected target object to obtain its motion trajectory.
  • Performing vehicle-tail detection on the front-view image includes using different tail detection models in different detection time periods.
  • Tail detection is performed on each frame of the front-view image collected by the camera while the current vehicle is stationary, and is turned off when the current driving state is the motion state.
  • The tail detection module is trained offline with two tail models for two detection time periods (such as day and night); each model extracts HOG features from tail samples and is trained with the Adaboost algorithm.
  • The daytime tail model is used during the day, and at night the module adaptively switches to the night-time tail model, so that different models serve different time periods and the preceding-vehicle start processing method remains usable at night.
  • The tail detection module downsamples the front-view image by certain scale factors to obtain downsampled images (optionally generating thumbnails of the image to speed up processing). Each downsampled image is scanned with a sliding window, the match between each window and the trained tail model is computed, and windows with high match scores are output as target objects, yielding one or more target objects in the front-view image. The scale factors may be preset values.
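A minimal sketch of the multi-scale sliding-window scan described above. The `matches_tail` argument stands in for the trained HOG + Adaboost tail classifier, which is not specified here, and the window size, step and scale factors are illustrative:

```python
def sliding_window_detect(image, matches_tail, win=32, step=8,
                          scales=(1.0, 0.5, 0.25)):
    """Scan downsampled copies of `image` (a 2-D list of grey values)
    with a sliding window; return boxes (x, y, size) in original-image
    coordinates for every window the classifier accepts."""
    def downsample(img, factor):
        stride = round(1 / factor)  # crude thumbnail by pixel skipping
        return [row[::stride] for row in img[::stride]]

    detections = []
    for s in scales:
        small = downsample(image, s)
        h, w = len(small), len(small[0])
        for y in range(0, h - win + 1, step):
            for x in range(0, w - win + 1, step):
                window = [row[x:x + win] for row in small[y:y + win]]
                if matches_tail(window):
                    # Map the hit back to full-resolution coordinates.
                    detections.append((int(x / s), int(y / s), int(win / s)))
    return detections
```

Scanning a fixed-size window over several downsampled copies is what lets one small model find vehicle tails of different apparent sizes.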
  • The target tracking module uses ColorCSK, a colour-based tracking algorithm, to track multiple target objects simultaneously.
  • ColorCSK uses colour as the image feature and performs fast correlation matching in the frequency domain.
  • The algorithm has two parts, modelling and matching/tracking. During modelling, the tail position of the target object serves as the training sample and the target model parameters are updated in the frequency domain according to the expected response; during matching/tracking, the existing tail model is correlated with the target object and the position with the highest response is output as the target's track position in the current frame.
  • Because the same target object appears at different sizes in the image depending on its distance from the current vehicle, ColorCSK is made scale-adaptive and maintains several tracking models of different sizes, achieving continuous, stable tracking of the same target object (e.g. a car) and yielding each target object's tracking trajectory.
  • Target tracking is stopped when the current vehicle's driving state changes to the motion state.
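The patent's ColorCSK tracker performs its correlation in the frequency domain for speed; the sketch below does the equivalent (slower) spatial cross-correlation to show the matching step: find the position inside a search region whose window correlates best with the stored tail template. The function name, search-region shape and use of raw (unnormalised) correlation are all illustrative assumptions:

```python
def track(frame, template, search_origin, search_size):
    """Return the top-left corner (x, y) in `frame` where `template`
    correlates best, searching a square region of side `search_size`
    whose top-left corner is `search_origin`.  Both images are 2-D
    lists of grey values."""
    th, tw = len(template), len(template[0])
    ox, oy = search_origin
    best_score, best_pos = float("-inf"), search_origin
    for y in range(oy, oy + search_size - th + 1):
        for x in range(ox, ox + search_size - tw + 1):
            # Raw cross-correlation of the window with the template.
            score = sum(frame[y + j][x + i] * template[j][i]
                        for j in range(th) for i in range(tw))
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_pos
```

In the real algorithm this whole search collapses into one element-wise multiplication in the Fourier domain, which is why the frequency-domain formulation is fast enough to run per frame per target.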
  • The generating unit includes: a determining module, configured to determine from each target object's motion trajectory whether the vehicle ahead of the current vehicle has moved; and a generating module, configured to generate, if the vehicle ahead has moved, reminder information for reminding the current vehicle to start, where the vehicle ahead is the vehicle travelling in the same lane as the current vehicle and in front of its head.
  • The determining module includes: a determining submodule, configured to determine each target object's running direction from its motion trajectory; a first determining submodule, configured to use the target object's running direction and trajectory length to decide whether it is a moving candidate target, yielding a candidate target queue; and a second determining submodule, configured to determine whether each candidate target in the queue is a bypass target, i.e. a target in a different lane from the current vehicle.
  • A deletion submodule is configured to delete any candidate target identified as a bypass target from the queue, yielding an updated candidate target queue; a third determining submodule is configured to determine whether a candidate target in the updated queue is the vehicle ahead of the current vehicle; and a fourth determining submodule is configured to determine, if it is, that the vehicle ahead of the current vehicle has moved.
  • When the current vehicle's driving state is the stationary state, the motion trajectories of one or more target objects are obtained by tail detection and target tracking. Each target object's running direction (turning or going straight) is determined from its trajectory, and the preset judgment condition corresponding to that direction decides whether the object is a moving candidate target, yielding a candidate target queue. For each candidate it is then determined whether it is in the same lane as the current vehicle; a candidate travelling in a different lane is a bypass target and is deleted from the queue, yielding the updated queue. The vehicle ahead is the vehicle in the same lane as the current vehicle and in front of its head; if a candidate in the updated queue is the vehicle ahead, it is determined that the vehicle ahead is moving and reminder information for reminding the current vehicle to start is generated, so that the current vehicle acts promptly.
  • The determining submodule works as follows: if the curvature of the curve fitted to the target object's motion trajectory is greater than a preset curvature threshold, the target object's running direction is determined to be turning; if it is not greater than the preset curvature threshold, the running direction is determined to be straight.
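A rough stand-in for the turn/straight decision: instead of fitting a curve and measuring its curvature, this sketch uses the normalised maximum deviation of the trajectory from the straight chord between its endpoints, which behaves like a curvature test for short tracks. The function name and threshold are hypothetical placeholders:

```python
def turn_or_straight(track, curv_thresh=0.1):
    """Classify a trajectory (list of (x, y) points) as 'turn' or
    'straight' from how far it bends away from the chord joining its
    endpoints, as a simple proxy for fitted-curve curvature."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    dx, dy = x1 - x0, y1 - y0
    chord = (dx * dx + dy * dy) ** 0.5
    if chord == 0:
        return "straight"
    # Max perpendicular distance of any point from the chord,
    # normalised by the chord length so the measure is scale-free.
    bend = max(abs(dy * (x - x0) - dx * (y - y0)) / chord for x, y in track)
    return "turn" if bend / chord > curv_thresh else "straight"
```

The scale-free ratio matters because the same physical turn traces a shorter arc in the image the further away the vehicle is.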
  • The first determining submodule includes: if the target object's running direction is turning and the length of its trajectory exceeds a first preset length threshold, the target object is determined to be a moving candidate target; if its running direction is straight and the length of its trajectory exceeds a second preset length threshold, it is likewise determined to be a moving candidate target.
  • The second determining submodule includes: detecting lane lines in the front-view image; if lane lines are detected, judging whether the candidate target is in the same lane as the current vehicle, in which case the candidate is not a bypass target; if no lane lines are detected, judging from the candidate's position in the front-view image and its motion trajectory whether it follows a bypass-vehicle trajectory, determining it to be a bypass target if it does and not a bypass target if it does not.
  • The third determining submodule determines whether a candidate target is the vehicle ahead from the initial positions of all candidate targets in the updated queue and the relative positions between them. Specifically, it determines the candidate target whose distance to the midpoint of the lower edge of the front-view image is shortest to be the vehicle ahead.
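The nearest-to-bottom-midpoint rule can be sketched directly; the function name, box centres and image size below are illustrative:

```python
def pick_preceding_vehicle(candidates, img_w, img_h):
    """Given the (cx, cy) centres of the remaining candidate targets,
    return the one closest to the midpoint of the image's lower edge
    (which sits directly ahead of the camera), or None if the updated
    candidate queue is empty."""
    if not candidates:
        return None
    ax, ay = img_w / 2, img_h  # midpoint of the lower image edge
    return min(candidates,
               key=lambda c: ((c[0] - ax) ** 2 + (c[1] - ay) ** 2) ** 0.5)
```

With the camera mounted centrally behind the mirror, the vehicle directly ahead in the same lane projects closest to this anchor point, which is why the rule works once bypass targets have been removed.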
  • The embodiment of the present application is a preceding-vehicle start processing method based on a gravity sensor and monocular vision.
  • Monocular vision means that the scene information in front of the current vehicle is acquired by a single camera and analysed intelligently.
  • A state machine model is adopted, and the current vehicle's driving state can be determined accurately through multi-feature fusion.
  • Different tail detection models are used in different detection time periods to detect vehicle tails in each front-view frame, yielding one or more target objects in the front-view image.
  • The result of lane-line detection is used to remove bypass vehicles from the target objects, filtering out the false alarms that a bypass vehicle's start would otherwise raise for the current vehicle; after the bypass vehicles are removed, it is determined whether the remaining target objects include the vehicle ahead of the current vehicle, and if so, reminder information is generated to remind the current vehicle to act in time.
  • If no valid lane line is detected, it is judged from the candidate target's position in the front-view image and its motion trajectory whether it follows a bypass-vehicle trajectory; if it does, the candidate is determined to be a bypass vehicle and is deleted.
  • The current vehicle's driving state is accurately determined using the combination of the front-view image information and the acceleration information.
  • Whether the vehicle ahead of the current vehicle has started is determined from the motion trajectories of the target objects in the front-view image, so as to decide whether reminder information needs to be generated.
  • The scheme uses few parameters and produces accurate results, solving the prior-art problem that a preceding-vehicle start reminder cannot be issued accurately while driving, so that when the vehicle ahead drives off the current vehicle can be reminded promptly and move off in time.
  • The modules provided in this embodiment operate in the same way as the corresponding steps of the method embodiment, and their application scenarios may be the same.
  • The solutions involved in the above modules are not limited to the content and scenarios of the foregoing embodiments; the modules may run on a computer terminal or a mobile terminal and may be implemented in software or hardware.
  • A processing system for a preceding vehicle's start is provided.
  • The processing system may include: a video sensor 71, a gravity sensor 73 and a processor 75.
  • The video sensor 71 is mounted on the current vehicle's front windshield on the same horizontal line as the rear-view mirror, and collects the front-view image of the current vehicle.
  • The gravity sensor 73 is configured to collect the current vehicle's acceleration information.
  • The processor 75 is connected to the video sensor 71 and the gravity sensor 73, determines the current vehicle's driving state from the image features of the front-view image and/or the acceleration information, and, when the driving state is the stationary state, acquires the motion trajectory of the target object in the front-view image and judges from it whether to generate reminder information for reminding the current vehicle to start.
  • The video sensor installed on the current vehicle's front windshield collects the front-view image, the gravity sensor collects the current vehicle's acceleration information, and the current vehicle's driving state is determined from the image features of the collected front-view image and/or the acceleration values. When the driving state is the stationary state, the motion trajectory of the target object in the front-view image is acquired, and whether to generate reminder information prompting the current vehicle to start is judged from that trajectory.
  • A single visual sensor may be used to acquire image information of the scene in front of the current vehicle's head; in this scheme the preceding-vehicle start reminder is implemented on the basis of the gravity sensor and monocular vision, which increases the accuracy of the reminder.
  • The processing system may further include a reminding unit 87 for outputting the reminder information as audio and/or images.
  • The function of the video sensor in the above embodiments of the present application may be implemented by a camera, and the reminding unit may be an alarm unit.
  • The camera 81 collects the front-view image of the current vehicle and the gravity sensor 83 collects the current vehicle's acceleration information; the processing unit 85 determines the current vehicle's driving state from the image features of the collected front-view image and/or the acceleration values. When the driving state is the stationary state, the motion trajectory of the target object in the front-view image is acquired, whether to generate reminder information prompting the current vehicle to start is judged from that trajectory, and the reminder information is generated accordingly. The reminding unit 87 (such as an alarm unit) can then alert the driver of the current vehicle by outputting sound and/or images, or by a change of light.
  • A preceding-vehicle start reminder system (comprising the video sensor, gravity sensor, processor and reminding unit) is mounted on the windshield behind the current vehicle's rear-view mirror. The video sensor (such as a camera) is kept horizontal and may lie on the same horizontal line as the rear-view mirror. The installation is shown in FIG. 9.
  • The current vehicle's driving state is accurately determined using the combination of the front-view image information and the current vehicle's acceleration information. When the driving state is the stationary state, whether the vehicle ahead of the current vehicle has started is determined from the motion trajectory of the target object in the front-view image, so as to decide whether reminder information needs to be generated. The scheme uses few parameters and produces accurate results, solving the prior-art problem that a preceding-vehicle start reminder cannot be issued accurately while driving, so that when the vehicle ahead drives off the current vehicle can be reminded promptly and move off in time.
  • An embodiment of the present application provides an application program for executing the preceding-vehicle start processing method at runtime, which may include:
  • determining the current vehicle's driving state from the image features of the front-view image collected by the video sensor and/or the acceleration information collected by the gravity sensor; when the driving state is the stationary state, acquiring the motion trajectories of the target objects (including the vehicle ahead) in the front-view image; and judging from those trajectories whether to generate reminder information for reminding the current vehicle to start.
  • The current vehicle's driving state is accurately determined using the combination of the front-view image information and the acceleration information.
  • When the driving state is the stationary state, whether the vehicle ahead of the current vehicle has started is determined from the motion trajectory of the target object in the front-view image, to decide whether reminder information needs to be generated.
  • The scheme uses few parameters and produces accurate results, solving the technical problem that the prior art cannot accurately issue a preceding-vehicle start reminder while driving, so that when the vehicle ahead drives off the current vehicle can be reminded promptly and move off in time.
  • An embodiment of the present application provides a storage medium for storing an application program that executes the preceding-vehicle start processing method at runtime, which may include:
  • determining the current vehicle's driving state from the image features of the front-view image collected by the video sensor and/or the acceleration information collected by the gravity sensor; when the driving state is the stationary state, acquiring the motion trajectories of the target objects (including the vehicle ahead) in the front-view image; and judging from those trajectories whether to generate reminder information for reminding the current vehicle to start.
  • The current vehicle's driving state is accurately determined using the combination of the front-view image information and the acceleration information.
  • The description is relatively brief; for relevant details refer to the description of the method embodiments.
  • The disclosed technical content may be implemented in other ways.
  • The device embodiments described above are only illustrative.
  • The division into units may be a division by logical function; in practice there may be other divisions. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The mutual coupling, direct coupling or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units or modules, and may be electrical or of other forms.
  • The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • The functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit.
  • The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • If the integrated unit is implemented as a software functional unit and sold or used as a standalone product, it may be stored in a computer-readable storage medium.
  • The storage medium includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • The foregoing storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

A processing method, device and system for a preceding vehicle's start, the processing method being applied in a preceding-vehicle start processing system comprising a video sensor and a gravity sensor. The processing method includes: collecting the image in front of the current vehicle's head with the video sensor, and collecting the current vehicle's acceleration information with the gravity sensor (S102); determining the current vehicle's driving state based on image features of the front-view image and/or the acceleration information (S104); when the current vehicle's driving state is the stationary state, acquiring the motion trajectory of the target object in the front-view image (S106); and judging from the target object's motion trajectory whether to generate reminder information for reminding the current vehicle to start (S108). The method solves the problem that a preceding-vehicle start reminder cannot be issued accurately while driving, so that when the vehicle ahead of the current vehicle drives off, the current vehicle can be reminded to act in time.

Description

Preceding-vehicle start processing method, device and system
This application claims priority to Chinese patent application No. 201510700316.6, filed with the Chinese Patent Office on October 23, 2015 and entitled "Preceding-vehicle start processing method, device and system", the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the field of driver-assistance systems, and in particular to a processing method, device and system for a preceding vehicle's start.
Background
At a road junction, when the traffic light turns from red to green, the vehicles in front drive off; if for whatever reason the vehicles behind fail to follow promptly, congestion and, in severe cases, accidents can result. In the prior art, whether the current vehicle needs to start is generally judged from the driving states of the preceding vehicle and the current vehicle. However, determining the current vehicle's driving state by gravity sensor, CAN bus or GPS yields results with large errors, and judging whether the preceding vehicle is moving by analysing the changing size of its tail in video, supplemented by ultrasonic or laser means, requires many parameters, involves complex data processing and produces inaccurate results. It follows that the prior art cannot accurately determine the driving states of the current and preceding vehicles, and thus cannot issue an accurate preceding-vehicle start reminder.
No effective solution has yet been proposed for the prior-art problem that a preceding-vehicle start reminder cannot be issued accurately while driving.
Summary
Embodiments of the present application provide a processing method, device and system for a preceding vehicle's start, so as to at least solve the technical problem that the prior art cannot accurately issue a preceding-vehicle start reminder while driving.
According to one aspect of the embodiments of the present application, a preceding-vehicle start processing method is provided, applied in a preceding-vehicle start processing system comprising a video sensor and a gravity sensor. The processing method includes: collecting the image in front of the current vehicle's head with the video sensor, and collecting the current vehicle's acceleration information with the gravity sensor; determining the current vehicle's driving state based on image features of the front-view image and/or the acceleration information; when the current vehicle's driving state is the stationary state, acquiring the motion trajectory of the target object in the front-view image; and judging from the target object's motion trajectory whether to generate reminder information for reminding the current vehicle to start.
According to another aspect of the embodiments of the present application, a preceding-vehicle start processing device is further provided, applied in a preceding-vehicle start processing system comprising a video sensor and a gravity sensor. The processing device includes: a collecting unit, which collects the image in front of the current vehicle's head with the video sensor and the current vehicle's acceleration information with the gravity sensor; a determining unit, configured to determine the current vehicle's driving state based on image features of the front-view image and/or the acceleration information; an acquiring unit, configured to acquire, when the current vehicle's driving state is the stationary state, the motion trajectory of the target object in the front-view image; and a generating unit, configured to judge from the target object's motion trajectory whether to generate reminder information for reminding the current vehicle to start.
According to another aspect of the embodiments of the present application, a preceding-vehicle start processing system is further provided, including: a video sensor, mounted on the current vehicle's front windshield on the same horizontal line as the rear-view mirror, for collecting the image in front of the current vehicle's head; a gravity sensor, for collecting the current vehicle's acceleration information; and a processor, connected to the camera and the gravity sensor, for determining the current vehicle's driving state based on image features of the front-view image and/or the acceleration information, acquiring, when the driving state is the stationary state, the motion trajectory of the target object in the front-view image, and judging from that trajectory whether to generate reminder information for reminding the current vehicle to start.
According to another aspect of the embodiments of the present application, an application program is further provided, the application program being configured to execute the above preceding-vehicle start processing method at runtime.
According to another aspect of the embodiments of the present application, a storage medium is further provided, the storage medium being configured to store an application program, the application program being configured to execute the above preceding-vehicle start processing method at runtime.
In the embodiments of the present application, the current vehicle's driving state is determined from the image features of the front-view image collected by the video sensor and/or the acceleration information collected by the gravity sensor; when the driving state is the stationary state, the motion trajectories of the target objects (including the vehicle ahead) in the front-view image are acquired, and whether to generate reminder information for reminding the current vehicle to start is judged from those trajectories. Through the above embodiments, the combination of front-view image information and acceleration information accurately determines the current vehicle's driving state; when that state is stationary, whether the vehicle ahead has started is determined from the target objects' motion trajectories so as to decide whether reminder information needs to be generated. The scheme uses few parameters and gives accurate results, solving the technical problem that the prior art cannot accurately issue a preceding-vehicle start reminder while driving, so that a prompt reminder can be given when the vehicle ahead drives off and the current vehicle can act in time.
Brief description of the drawings
The drawings described here are provided for a further understanding of the present application and constitute a part of it; the illustrative embodiments of the present application and their description explain the present application and do not unduly limit it. In the drawings:
FIG. 1 is a flowchart of a preceding-vehicle start processing method according to an embodiment of the present application;
FIG. 2 is a flowchart of an optional preceding-vehicle start processing method according to an embodiment of the present application;
FIG. 3 is a flowchart of another optional preceding-vehicle start processing method according to an embodiment of the present application;
FIG. 4 is a flowchart of a third optional preceding-vehicle start processing method according to an embodiment of the present application;
FIG. 5 is a flowchart of a fourth optional preceding-vehicle start processing method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a preceding-vehicle start processing device according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a preceding-vehicle start processing system according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an optional preceding-vehicle start processing system according to an embodiment of the present application;
FIG. 9 is a schematic diagram of another optional preceding-vehicle start processing system according to an embodiment of the present application.
Detailed description
First, some of the terms appearing in the description of the embodiments of the present application are explained as follows:
Advanced Driver Assistance System (ADAS): uses multiple kinds of sensors mounted on the vehicle to collect environmental data inside and outside the vehicle in real time and to identify, detect and track static and dynamic objects, so that the driver becomes aware of possible danger as early as possible, improving driving safety.
Monocular vision: acquiring scene information with a single camera and analysing it intelligently.
Line Segment Detector (LSD): a line-segment detection algorithm that achieves sub-pixel accuracy in linear time.
Kalman filter: optimally estimates the state of a system from its observations using the linear system state equation.
Gravity sensor (G-sensor): a sensor that perceives changes in acceleration.
CAN bus: short for Controller Area Network, an ISO-standardised serial communication protocol and one of the most widely used field buses in the world.
Vehicle tail: the rear of a vehicle confirmed in the image by the detection and tracking algorithms, represented by a rectangular box; in this document the same concept as "vehicle" and "target".
Preceding vehicle: the unique vehicle confirmed in the image to be in front of the current vehicle's head; in this document the same concept as "the vehicle ahead".
Histogram of Oriented Gradients (HOG): a feature descriptor used for object detection in computer vision and image processing, formed by computing and accumulating histograms of gradient orientations over local regions of an image.
Adaboost: an iterative algorithm whose core idea is to train different weak classifiers on the same training set and then combine them into a strong classifier.
To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first", "second", etc. in the specification, claims and drawings of the present application are used to distinguish similar objects and need not describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application described here can be implemented in orders other than those illustrated or described. Moreover, the terms "comprise" and "have" and their variants are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to those steps or units expressly listed, and may include other steps or units not expressly listed or inherent to the process, method, product or device.
Embodiment 1
According to an embodiment of the present application, a method embodiment of a preceding-vehicle start processing method is provided. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that given here.
FIG. 1 is a flowchart of a preceding-vehicle start processing method according to an embodiment of the present application. As shown in FIG. 1, the method is applied in a preceding-vehicle start processing system comprising a video sensor and a gravity sensor, and may include the following steps:
Step S102: collect the image in front of the current vehicle's head with the video sensor, and collect the current vehicle's acceleration information with the gravity sensor.
Step S104: determine the current vehicle's driving state based on the image features of the front-view image and/or the acceleration information.
The current vehicle's driving state may be the stationary state or the motion state. The stationary state means the vehicle is at rest relative to the road surface, for example while waiting at traffic lights or in a traffic jam; the motion state means the vehicle is moving relative to the road surface, including accelerating, decelerating and moving at constant speed.
Step S106: when the current vehicle's driving state is the stationary state, acquire the motion trajectory of the target object in the front-view image.
There may be one or more target objects in the front-view image.
Step S108: judge from the target object's motion trajectory whether to generate reminder information for reminding the current vehicle to start.
With this embodiment of the present application, the current vehicle's driving state is determined from the image features of the front-view image collected by the video sensor and/or the acceleration information collected by the gravity sensor; when the driving state is the stationary state, the motion trajectories of the target objects (including the vehicle ahead) in the front-view image are acquired, and whether to generate reminder information for reminding the current vehicle to start is judged from those trajectories. Through the above embodiment, the combination of front-view image information and acceleration information accurately determines the current vehicle's driving state; when that state is stationary, whether the vehicle ahead has started is determined from the target objects' motion trajectories so as to decide whether reminder information needs to be generated. The scheme uses few parameters and gives accurate results, solving the problem that the prior art cannot accurately issue a preceding-vehicle start reminder while driving, so that a prompt reminder can be given when the vehicle ahead drives off and the current vehicle can act in time.
In the above embodiments, a camera mounted on the windshield behind the current vehicle's rear-view mirror may capture continuous frames of the scene in front of the vehicle head in real time to obtain the front-view images. The gravity sensor acquires the current vehicle's acceleration information in real time; this information comprises acceleration values along three different directions of the world coordinate system, namely the vehicle's direction of travel, the direction perpendicular to the direction of travel and parallel to the road surface, and the direction perpendicular to the road surface.
Obtaining the front-view images and the acceleration information requires no modification of the vehicle's wiring, and is also more accurate than schemes that determine the vehicle's driving state with a gravity sensor alone.
Optionally, a single camera may be used in the above embodiments: it captures consecutive frames of the scene in front of the current vehicle's head in real time to obtain the front-view images, while the gravity sensor acquires the current vehicle's acceleration information. The current vehicle's driving state is determined from the image features of the front-view images and/or the acceleration information; if the determined driving state is the stationary state, the motion trajectories of one or more target objects in the front-view image are acquired, and from those trajectories it is judged whether the vehicle ahead of the current vehicle has driven off. If the vehicle ahead has driven off and the current vehicle is still stationary, reminder information is generated.
下面以当前车辆行驶在交通路口为应用场景详细说明本申请实施例:
当前行驶车辆的后视镜后方的挡风玻璃上安装有摄像机,若该当前车辆行驶到交通路口时,该交通路口的交通灯为红色,则当前车辆停车,并启动该处理方法。在启动该算法之后,通过摄像机捕获当前车辆的前方场景的连续多帧画面,并通过重力传感器获取当前车辆在三个不同方向上的加速度信息,通过该车头前方图像的图像特征和/或加速度信息的加速度值确定当前车辆的行驶状态为静止状态,跟踪车头前方图像中目标对象的运动轨迹,若通过目标对象的运动轨迹判断出当前车辆的前一车辆驶出,则生成用于提醒当前车辆启动的提醒信息,并通过提示装置输出该提醒信息,以提醒当前车辆及时启动。
通过上述实施例,在交通信号灯转换颜色时,可以实现当前车辆的前一车辆驶出后,当前车辆能够自动、准确识别并起步跟随的效果。
根据本申请的上述实施例,基于车头前方图像的图像特征和/或加速度信息确定当前车辆的行驶状态包括:在当前车辆的行驶状态为初始状态时,基于连续N帧车头前方图像的帧差前景比例和/或当前车辆的加速度信息确定当前车辆的行驶状态为静止状态或运动状态;在当前车辆的行驶状态不为初始状态时,基于连续N帧车头前方图像的帧差前景比例和/或帧差前景离散度以及当前车辆的加速度信息确定当前车辆的行驶状态为静止状态或运动状态,其中,车头前方图像的图像特征包括帧差前景比例和帧差前景离散度。
其中,帧差前景比例是指帧差前景点个数与图像像素点个数的比例,用来衡量场景中与车辆存在相对运动的目标比例;帧差前景离散度是指前景块面积与图像像素点个数的比例,用来衡量前景的离散程度,以区分静止物体和运动目标造成的帧差。其中的前景块是对帧差前景点进行腐蚀、膨胀、连通域处理得到的矩形块。
可以理解的是,当前车辆与当前场景中的静止物体如路面、建筑物、栏杆等、以及当前场景中的运动物体如车辆、行人等均可以存在相对运动,因此,对相邻帧车头前方图像进行图像帧差算法后可得到帧差。而处于运动状态下的车辆与处于静止状态下的车辆对应的帧差特性不同。对处于运动状态下的车辆来说,帧差主要是由车辆与场景中的静止物体相对运动产生的,此时帧差前景比例较大,帧差前景离散度也较大。而对处于静止状态下的车辆来说,帧差主要由车辆与其它车辆或行人相对运动产生,因此帧差前景比例和前景离散度较小。因此,可以基于连续N帧车头前方图像的帧差前景比例和/或帧差前景离散度、并结合当前车辆的加速度信息,确定当前车辆的行驶状态为静止状态或运动状态。
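为便于理解上述两个图像特征,下面给出一个最小化的示意实现(仅为示意:其中的差分阈值为假设值,且用全部前景点的外接矩形面积近似前景块面积;实际实现中前景块需经腐蚀、膨胀、连通域处理得到):

```python
import numpy as np

def frame_diff_features(prev_gray, curr_gray, diff_thresh=25):
    """计算帧差前景比例与(简化的)帧差前景离散度。

    帧差前景比例 = 帧差前景点个数 / 图像像素点个数;
    帧差前景离散度在此用前景点外接矩形面积 / 图像像素点个数近似。
    """
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    fg = diff > diff_thresh                     # 帧差前景点
    total = fg.size
    fg_ratio = fg.sum() / total                 # 帧差前景比例
    ys, xs = np.nonzero(fg)
    if len(ys) == 0:
        return fg_ratio, 0.0
    block_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    fg_dispersion = block_area / total          # 帧差前景离散度(近似)
    return fg_ratio, fg_dispersion
```

车辆运动时,帧差前景点遍布整幅图像,两个特征值都较大;车辆静止时,只有局部运动目标产生前景,两个特征值都较小。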
上述实施例中的重力传感器获取的加速度信息包括加速度值,能准确反映当前车辆是否处于加减速状态,在当前车辆启动或停止时,加速度值较大,在当前车辆匀速运动或静止时,加速度值较小。
在上述实施例中,在确定当前车辆的行驶状态时使用了车前场景的帧差前景离散度特征,可以去除旁道车辆对判断当前车辆行驶状态的干扰。
通过上述实施例,基于当前车辆的车头前方图像的图像特征和/或加速度信息确定当前车辆的行驶状态,从而实现对当前车辆行驶状态的准确确定。
具体地,在当前车辆的行驶状态为初始状态时,基于连续N帧车头前方图像的帧差前景比例和/或当前车辆的加速度信息确定当前车辆的行驶状态为静止状态或运动状态包括:若连续N帧车头前方图像的帧差前景比例大于预设比例阈值、或连续N帧车头前方图像中每帧车头前方图像对应的当前车辆的加速度值大于预设加速度阈值,则确定当前车辆的行驶状态为运动状态,其中,加速度信息包括加速度值;若连续N帧车头前方图像的帧差前景比例不大于预设比例阈值、且连续N帧车头前方图像中每帧车头前方图像对应的当前车辆的加速度值不大于预设加速度阈值,则确定当前车辆的行驶状态为静止状态。
其中,加速度值为通过重力传感器采集的加速度信息。
进一步地,在当前车辆的行驶状态不为初始状态时,基于连续N帧车头前方图像的帧差前景比例和/或帧差前景离散度以及当前车辆的加速度信息确定当前车辆的行驶状态为静止状态或运动状态包括:在当前车辆的行驶状态为运动状态的情况下,若连续N帧车头前方图像中每帧车头前方图像对应的当前车辆的加速度值不大于预设加速度阈值、且连续N帧车头前方图像的帧差前景比例不大于预设比例阈值,则确定当前车辆的行驶状态变更为静止状态;或,若连续N帧车头前方图像中每帧车头前方图像对应的当前车辆的加速度值不大于预设加速度阈值、且连续N帧车头前方图像的帧差前景离散度不大于预设离散度阈值,则确定当前车辆的行驶状态变更为静止状态;否则,确定当前车辆的行驶状态为运动状态。在当前车辆的行驶状态为静止状态的情况下,若连续N帧车头前方图像中每帧车头前方图像对应的当前车辆的加速度值大于预设加速度阈值,则确定当前车辆的行驶状态变更为运动状态,否则,确定当前车辆的行驶状态为静止状态。
可选地,预设加速度阈值可以用Tg表示,预设比例阈值可以用Tf表示,预设离散度阈值可以用Ts表示。
可选地,该实施例通过视频传感器(如上述的摄像机)实时获取当前车辆的车头前方的连续多帧画面,得到当前车辆的车头前方的图像信息,重力传感器获取当前车辆的加速度信息,基于该车头前方的图像信息和加速度信息确定当前车辆的行驶状态。
在该实施例中,可以通过状态机模型确定当前车辆的行驶状态,具体地,状态机模型包括三个状态:初始状态、运动状态和静止状态。初始状态是在系统启动时,由于采集的车头前方图像的帧数未达到系统设定的最小帧数阈值,系统无法判断出当前车辆的行驶状态,此时将当前车辆的行驶状态设置为初始状态。运动状态是指当前车辆相对路面运动的状态,包括加速运动、减速运动以及匀速运动。静止状态是指当前车辆相对路面静止,比如等待红绿灯或堵车时。
上述的系统是指用于实现前车起步的处理方法的系统。
可选地,状态机模型如图2所示,该状态机模型的三个状态之间的切换可以利用当前车辆的车头前方图像的图像特征和重力传感器获取的加速度信息来实现。
在系统刚启动时,当前车辆的行驶状态为初始状态。当采集的车头前方图像的帧数达到系统设定的最小帧数阈值时,基于当前车辆的连续N帧车头前方图像的帧差前景比例和/或当前车辆的加速度信息确定当前车辆的行驶状态为静止状态或运动状态(即图2中通过规则1或规则2确定当前车辆的行驶状态)。
具体地,如果在N帧当前车辆车头前方图像内,每帧车头前方图像对应的当前车辆的加速度值都超过预设加速度阈值Tg或每帧车头前方图像对应的当前车辆的帧差前景比例都大于预设比例阈值Tf,那么确定当前车辆的行驶状态为运动状态,即图2中的规则1。
如果在N帧当前车辆车头前方图像内,每帧车头前方图像对应的当前车辆的加速度值都小于预设加速度阈值Tg,且每帧车头前方图像对应的当前车辆的帧差前景比例都小于预设比例阈值Tf,那么确定当前车辆的行驶状态为静止状态,即图2中的规则2。
在确定当前车辆的行驶状态为静止状态或运动状态的情况下,可以按照图2所示的规则3和规则4对当前车辆的当前的行驶状态进行判断,具体地:
规则3:在当前车辆的行驶状态为运动状态的情况下,如果连续N帧车头前方图像中每帧车头前方图像对应的当前车辆的加速度值小于预设加速度阈值Tg、且连续N帧车头前方图像的帧差前景比例小于预设比例阈值Tf;或,如果连续N帧车头前方图像中每帧车头前方图像对应的当前车辆的加速度值小于预设加速度阈值Tg,且连续N帧车头前方图像的帧差前景离散度小于预设离散度阈值Ts,那么当前车辆的行驶状态切换到静止状态;否则,当前车辆维持运动状态。
规则4:在当前车辆的行驶状态为静止状态的情况下,如果连续N帧车头前方图像中每帧车头前方图像对应的当前车辆的加速度值大于预设加速度阈值Tg,那么当前车辆的行驶状态切换至运动状态;否则当前车辆维持静止状态。
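上述规则1~规则4可以用如下状态机草图表示(仅为示意:阈值Tg、Tf、Ts与窗口长度N均为假设的示例值,并非本申请给出的具体取值):

```python
INIT, MOVING, STATIC = "init", "moving", "static"

class DrivingStateMachine:
    """基于连续N帧的加速度值、帧差前景比例与帧差前景离散度的状态机示意。"""

    def __init__(self, Tg=0.5, Tf=0.2, Ts=0.1, N=5):
        self.Tg, self.Tf, self.Ts, self.N = Tg, Tf, Ts, N
        self.state = INIT
        self.window = []      # 最近N帧的 (加速度值, 前景比例, 前景离散度)

    def update(self, acc, fg_ratio, fg_disp):
        self.window.append((acc, fg_ratio, fg_disp))
        if len(self.window) < self.N:
            return self.state                 # 帧数不足,维持当前状态
        w = self.window[-self.N:]
        self.window = w
        all_acc_low = all(a <= self.Tg for a, _, _ in w)
        all_ratio_low = all(r <= self.Tf for _, r, _ in w)
        all_disp_low = all(d <= self.Ts for _, _, d in w)
        if self.state == INIT:
            # 规则1:连续N帧加速度值都大于Tg或前景比例都大于Tf -> 运动
            if all(a > self.Tg for a, _, _ in w) or all(r > self.Tf for _, r, _ in w):
                self.state = MOVING
            # 规则2:连续N帧加速度值与前景比例都不大于阈值 -> 静止
            elif all_acc_low and all_ratio_low:
                self.state = STATIC
        elif self.state == MOVING:
            # 规则3:加速度值都不大于Tg,且前景比例或前景离散度都不大于阈值 -> 静止
            if all_acc_low and (all_ratio_low or all_disp_low):
                self.state = STATIC
        else:
            # 规则4:连续N帧加速度值都大于Tg -> 运动
            if all(a > self.Tg for a, _, _ in w):
                self.state = MOVING
        return self.state
```

例如,静止等待红绿灯时连续输入低加速度、低前景比例的帧,状态保持静止;起步后连续输入高加速度的帧,状态切换为运动。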
通过上述实施例,采用状态机模型可以准确判断当前车辆的行驶状态。
根据本申请的上述实施例,在当前车辆的行驶状态为静止状态时,获取车头前方图像中目标对象的运动轨迹包括:对车头前方图像进行车尾检测,得到车头前方图像中的一个或多个目标对象;对检测到的各个目标对象进行目标跟踪,得到各个目标对象的运动轨迹。
具体地,对车头前方图像进行车尾检测包括:在不同的检测时间段采用不同的车尾检测模型对车头前方图像进行车尾检测。
可选地,根据目标对象的运动轨迹判断是否生成用于提醒当前车辆启动的提醒信息包括:基于各个目标对象的运动轨迹判断当前车辆的前一车辆是否发生运动;若当前车辆的前一车辆发生运动,则生成用于提醒当前车辆启动的提醒信息,其中,前一车辆为与当前车辆行驶在同一车道且在当前车辆车头前方的车辆。
在上述实施例中,在当前车辆的行驶状态为静止状态时,通过车尾检测和目标跟踪得到一个或多个目标对象的运动轨迹,基于该一个或多个目标对象的运动轨迹判断当前车辆的前车是否已经驶出,若当前车辆的前车已经驶出,且该当前车辆仍处于静止状态,则生成提醒信息,以提醒当前车辆及时动作。
下面结合图3详述本申请上述实施例,如图3所示,该实施例可以通过如下步骤实现:
步骤S301:通过视频传感器采集当前车辆的车头前方图像。
步骤S302:通过重力传感器采集当前车辆的加速度信息。
具体地,可以通过一台摄像机实时捕获当前车辆的车头前方的连续画面,获取当前车辆的车头前方图像,重力传感器实时获取当前车辆在三个不同方向上的加速度信息。
步骤S303:基于车头前方图像的图像特征和/或加速度信息确定当前车辆的行驶状态是否为静止状态。
若确定当前车辆的行驶状态为静止状态,则执行步骤S304;若确定当前车辆的行驶状态不为静止状态,则结束提醒算法。
步骤S304:对车头前方图像进行车尾检测,得到车头前方图像中的一个或多个目标对象。
可选地,在当前车辆行驶状态为静止状态时,对通过一台摄像机采集的当前车辆的当前帧的车头前方图像进行车尾检测,在当前车辆行驶状态为运动状态时,关闭车尾检测。
在该实施例中,车尾检测采用离线学习方式,分别训练两个检测时间段(如白天和晚上两个检测时间段)的两个车尾模型,车尾模型通过提取车尾样本HOG特征,用Adaboost算法训练得到。在进行车尾检测时,白天采用训练得到的白天车尾模型,到了夜间,自适应切换为夜间车尾模型,可以实现在不同时间段调用不同模型,保证了前车起步的处理方法在夜间的可用性。
可选地,在进行车尾检测时,以特定尺度因子对车头前方图像进行降采样,得到降采样图像(降采样相当于生成对应图像的缩略图,可提高系统处理速度),对每个降采样图像通过滑动窗口方式扫描,计算各滑动窗口图像与训练得到的车尾模型的匹配度,匹配度较高的窗口作为目标对象输出,得到车头前方图像中的一个或多个目标对象,并对得到的一个或多个目标对象进行目标跟踪。其中,特定尺度因子可以预先设置。
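上述多尺度滑动窗口扫描过程可示意如下(其中 model_score 用一个可调用对象代替离线训练得到的HOG+Adaboost车尾模型;窗口大小、步长、尺度与匹配度阈值均为假设值):

```python
import numpy as np

def detect_tails(image, model_score, win=(8, 8), step=4,
                 scales=(1.0, 0.5), thresh=0.8):
    """多尺度滑动窗口车尾检测示意。

    image: 灰度图 (H, W);model_score: 对窗口图像返回匹配度的函数。
    返回原图坐标系下的 (x, y, w, h) 目标框列表。
    """
    detections = []
    for s in scales:                            # 以特定尺度因子降采样
        h, w = int(image.shape[0] * s), int(image.shape[1] * s)
        stride = max(int(round(1 / s)), 1)
        small = image[::stride, ::stride][:h, :w]
        wh, ww = win
        for y in range(0, small.shape[0] - wh + 1, step):
            for x in range(0, small.shape[1] - ww + 1, step):
                patch = small[y:y + wh, x:x + ww]
                if model_score(patch) >= thresh:  # 匹配度较高的窗口作为目标输出
                    detections.append((int(x / s), int(y / s),
                                       int(ww / s), int(wh / s)))
    return detections
```

实际系统中还需对不同尺度下的重叠检测框做非极大值抑制,此处省略。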
步骤S305:对检测到的各个目标对象进行目标跟踪,得到各个目标对象的运动轨迹。
其中,目标跟踪采用基于颜色的目标跟踪算法ColorCSK,可对多个目标对象同时进行跟踪。ColorCSK算法以颜色为图像特征,利用频率域实现快速相关匹配,该算法包括建模和匹配跟踪两部分:建模时,将目标对象的车尾位置作为训练样本,根据期望响应在频率域中更新目标模型参数;匹配跟踪时,利用已有的车尾模型对目标对象进行相关匹配,找到响应值最高位置作为当前帧图像的目标对象的轨迹位置输出。
可选地,在目标跟踪过程中,由于同一目标对象与当前车辆的距离不同,在图像中形成的像的大小也会不同,ColorCSK算法具有尺度自适应性,可维持不同尺寸的多个跟踪模型,以实现对同一目标对象(如车辆)的持续稳定跟踪,得到各个目标对象的跟踪轨迹。
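ColorCSK中“利用频率域实现快速相关匹配、找响应值最高位置”这一步,可用FFT互相关作如下示意(省略了颜色特征、核映射与模型在线更新,仅演示频率域相关与峰值定位):

```python
import numpy as np

def correlation_peak(frame, template):
    """用频率域互相关在 frame 中寻找与 template 响应值最高的位置。

    返回 template 左上角在 frame 中的 (row, col)。
    """
    th, tw = template.shape
    # 零均值化,避免大面积亮区天然得到高响应
    t = template - template.mean()
    padded = np.zeros_like(frame, dtype=float)
    padded[:th, :tw] = t
    # 互相关 = IFFT( FFT(frame) * conj(FFT(template)) ),逐元素相乘
    resp = np.real(np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(padded))))
    r, c = np.unravel_index(np.argmax(resp), resp.shape)
    return r, c
```

相比空间域逐位置滑动匹配,频率域相乘将相关运算的复杂度降为 FFT 的 O(N log N),这正是此类相关滤波跟踪算法速度快的原因。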
在当前车辆行驶状态为运动状态时,停止目标跟踪。
步骤S306:检测车头前方图像中的车道线。
具体地,检测车头前方图像中是否有车道线,以及车道线在图像中的位置。
步骤S307:基于车道线的检测结果和目标对象的运动轨迹判断当前车辆的前一车辆是否发生运动。
基于目标对象与车道线的相对位置判断目标对象是否为旁道目标,去除旁道车辆起步对当前车辆造成的误提醒。
若当前车辆的前一车辆发生运动,则执行步骤S308;若当前车辆的前一车辆未发生运动,则结束提醒流程。
步骤S308:生成用于提醒当前车辆启动的提醒信息。
根据本申请的上述实施例,基于各个目标对象的运动轨迹判断当前车辆的前一车辆是否发生运动可以包括:基于各个目标对象的运动轨迹确定各个目标对象的运行方向;利用目标对象的运行方向和运动轨迹的长度判断目标对象是否为正在运动的候选目标,得到候选目标队列;判断候选目标队列中的各个候选目标是否为旁道目标,其中,旁道目标为与当前车辆行驶在不同车道的目标;若候选目标为旁道目标,则从候选目标队列中删除旁道目标,得到更新后的候选目标队列;判断更新后的候选目标队列中的候选目标是否为当前车辆的前一车辆;若更新后的候选目标队列中的候选目标为当前车辆的前一车辆,则判断出当前车辆的前一车辆发生运动。
具体地,在当前车辆的行驶状态为静止状态时,通过车尾检测和目标跟踪得到一个或多个目标对象的运动轨迹,基于各个目标对象的运动轨迹确定各个目标对象的运行方向(转弯或直行),并利用目标对象的运动轨迹和与运行方向对应的预设判断条件确定目标对象是否为正在运动的候选目标,得到候选目标队列。判断候选目标队列中的各个候选目标是否与当前车辆行驶在同一车道,若候选目标与当前车辆行驶在不同车道,则该候选目标为旁道目标,从候选目标队列中删除该候选目标,得到更新后的候选目标队列,并判断更新后的候选目标队列中的候选目标是否为当前车辆的前一车辆,前一车辆是指与当前车辆行驶在同一车道且在当前车辆车头前方的车辆,若为当前车辆的前一车辆,则判断出当前车辆的前一车辆发生运动,生成用于提醒当前车辆启动的提醒信息。
通过上述实施例,基于各个目标对象的运动轨迹判断当前车辆的前一车辆是否发生运动,若当前车辆的前一车辆发生运动,则生成用于提醒当前车辆启动的提醒信息,以提醒当前车辆及时动作。
具体地,基于各个目标对象的运动轨迹确定各个目标对象的运行方向包括:若目标对象的运动轨迹的拟合曲线曲率大于预设曲率阈值,则确定目标对象的运行方向为转弯;若目标对象的运动轨迹的拟合曲线曲率不大于预设曲率阈值,则确定目标对象的运行方向为直行。
根据本申请的上述实施例,利用目标对象的运行方向和运动轨迹的长度判断目标对象是否为正在运动的候选目标包括:若目标对象的运行方向为转弯,且目标对象的运行轨迹的长度大于第一预设长度阈值,则确定目标对象为正在运动的候选目标;若目标对象的运行方向为直行,且目标对象的运行轨迹的长度大于第二预设长度阈值,则确定目标对象为正在运动的候选目标。
可选地,预设曲率阈值可以用T表示,第一预设长度阈值可以用L1表示,第二预设长度阈值可以用L2表示。
可选地,基于对每一帧图像进行车尾检测和目标跟踪得到的检测跟踪结果,更新一个或多个目标对象的运动轨迹。首先通过各个目标对象的运动轨迹确定各个目标对象的运行方向(直行或转弯):如果目标对象的运动轨迹拟合曲线曲率大于预设曲率阈值T,则确定该目标对象的运行方向为转弯;如果目标对象的运动轨迹拟合曲线曲率不大于T,则确定该目标对象的运行方向为直行。对于运行方向为转弯的目标对象,如果对应的运动轨迹长度大于第一预设长度阈值L1(单位为像素),则确定该目标对象为正在运动的候选目标,将该目标对象加入候选目标队列;对于运行方向为直行的目标对象,如果对应的运动轨迹长度大于第二预设长度阈值L2(单位为像素),则确定该目标对象为正在运动的候选目标,将该目标对象加入候选目标队列。如果轨迹较短,没有满足条件的候选目标,则直接跳出,进入下一帧运动轨迹的更新。
通过上述实施例,可以准确判断出当前车辆的前一车辆是否发生运动,并在前一车辆发生运动的情况下,生成用于提醒当前车辆启动的提醒信息,以提醒当前车辆及时动作。
下面结合图4详述本申请上述实施例,如图4所示,该实施例可以通过如下步骤实现:
步骤S401:获取目标对象的运动轨迹。
步骤S402:基于各个目标对象的运动轨迹确定各个目标对象的运行方向。
若目标对象的运行方向为转弯,则执行步骤S403;若目标对象的运行方向为直行,则执行步骤S404。
具体地,若目标对象的运动轨迹的拟合曲线曲率大于预设曲率阈值,则确定目标对象的运行方向为转弯;若目标对象的运动轨迹的拟合曲线曲率不大于预设曲率阈值,则确定目标对象的运行方向为直行。
步骤S403:判断目标对象的运行轨迹的长度是否大于第一预设长度阈值。
若目标对象的运行轨迹的长度大于第一预设长度阈值,则确定目标对象为正在运动的候选目标。
步骤S404:判断目标对象的运行轨迹的长度是否大于第二预设长度阈值。
若目标对象的运行轨迹的长度大于第二预设长度阈值,则确定目标对象为正在运动的候选目标。
步骤S405:将目标对象加入候选目标队列。
具体地,将确定为正在运动的候选目标的目标对象加入候选目标队列。
步骤S406:判断候选目标队列中的候选目标是否为旁道目标。
其中,旁道目标为与当前车辆行驶在不同车道的目标。
若候选目标为旁道目标,则执行步骤S409:从候选目标队列中删除旁道目标,得到更新后的候选目标队列;若候选目标不为旁道目标,执行步骤S407。
步骤S407:判断候选目标是否为当前车辆的前一车辆。
判断更新后的候选目标队列中的候选目标是否为前车,若更新后的候选目标队列中的候选目标为当前车辆的前一车辆,则判断出当前车辆的前一车辆发生运动,执行步骤S408。
步骤S408:报警。
根据本申请的上述实施例,判断候选目标队列中的各个候选目标是否为旁道目标包括:检测车头前方图像中的车道线;若检测到车道线,则判断候选目标是否与当前车辆行驶在同一车道内,若候选目标与当前车辆行驶在同一车道内,则候选目标不是旁道目标;若未检测到车道线,基于候选目标在车头前方图像中的位置和候选目标的运动轨迹判断候选目标是否满足旁道车辆行驶轨迹,若候选目标满足旁道车辆行驶轨迹,则确定候选目标为旁道目标;若候选目标不满足旁道车辆行驶轨迹,则确定候选目标不是旁道目标。
可选地,采用车道线检测判断候选目标是否与当前车辆在同一车道内,以过滤旁道车辆起步造成的误触发,车道线检测模块包括线段检测、线段合并、车道线筛选和车道线跟踪四个步骤,其中线段检测采用LSD算法。
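在检测到有效车道线的情况下,“候选目标是否在本车道内”的判断可示意如下(这里假设车道线已拟合为 x = k*y + b 形式的直线方程,仅作几何判断示意,不涉及LSD线段检测本身):

```python
def is_side_lane_target(box, left_line, right_line):
    """判断候选目标是否为旁道目标(车道线有效时)。

    box: 候选目标框 (x, y, w, h);
    left_line / right_line: 本车道左右车道线,形如 (k, b),表示 x = k*y + b。
    以目标框下边缘中点(近似车尾接地点)是否落在两条车道线之间为准。
    """
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + float(h)        # 目标框下边缘中点
    x_left = left_line[0] * cy + left_line[1]
    x_right = right_line[0] * cy + right_line[1]
    return not (x_left <= cx <= x_right)      # 不在本车道内即为旁道目标
```

之所以取框的下边缘中点,是因为该点近似车辆与路面的接触位置,受车身高度与透视形变的影响较小。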
通过上述实施例,检测车头前方图像中的车道线,并在检测到有效车道线和未检测到有效车道线两种情况下,判断候选目标队列中的各个候选目标是否为旁道目标,从候选目标队列中删除旁道目标,得到更新后的候选目标队列,以过滤旁道车辆起步时造成对当前车辆的误提醒。
下面结合图5详述本申请上述实施例的旁道车辆的判断流程,如图5所示,旁道车辆的判断流程通过如下步骤实现:
步骤S501:检测车头前方图像中的车道线是否有效。
若检测到有效车道线,则执行步骤S502;若未检测到有效车道线,则执行步骤S503。
步骤S502:判断候选目标是否在有效车道线内。
具体地,判断候选目标是否与当前车辆行驶在同一车道内。若候选目标与当前车辆行驶在同一车道内,则候选目标不是旁道目标,执行步骤S504:确定候选目标为本车道车辆;若候选目标未与当前车辆行驶在同一车道内,则执行步骤S505:确定候选目标为旁道车辆。
其中,本车道车辆是指与当前车辆在同一车道内的候选目标,旁道车辆是指与当前车辆不在同一车道内的候选目标。
步骤S503:判断候选目标是否满足旁道车辆运动轨迹。
具体地,基于候选目标在车头前方图像中的位置和候选目标的运动轨迹判断候选目标是否满足旁道车辆运动轨迹。
若候选目标满足旁道车辆运动轨迹,则执行步骤S505;若候选目标不满足旁道车辆运动轨迹,则执行步骤S504。
根据本申请的上述实施例,判断更新后的候选目标队列中的候选目标是否为当前车辆的前一车辆包括:通过更新后的候选目标队列中的所有候选目标的初始位置和各个候选目标之间的相对位置判断候选目标是否为前一车辆。
具体地,通过更新后的候选目标队列中的所有候选目标的初始位置和各个候选目标之间的相对位置判断候选目标是否为前一车辆可以包括:通过更新后的候选目标队列中的所有候选目标的初始位置和各个候选目标之间的相对位置,确定距离车头前方图像的下边缘中点最短的候选目标为前一车辆。
具体地,在得到更新后的候选目标队列之后,获取更新后的候选目标队列中的所有候选目标的初始位置和各个候选目标之间的相对位置,并通过分析确定距离车头前方图像的下边缘中点最短的候选目标,该候选目标为当前车辆的前一车辆,则判断出当前车辆的前一车辆发生运动,生成用于提醒当前车辆启动的提醒信息,以提醒当前车辆及时动作。若通过分析未得到有效候选目标,则候选目标队列中不存在发生运动的前一车辆,进入下一帧运动轨迹的更新。
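按“距离车头前方图像下边缘中点最短”选出前一车辆,可草拟如下(用目标框下边缘中点到图像下边缘中点的欧氏距离作为距离度量,该度量方式为示意性假设):

```python
import math

def pick_preceding(candidates, img_w, img_h):
    """从更新后的候选目标队列中选出前一车辆。

    candidates: [(x, y, w, h), ...] 候选目标框;
    返回距离图像下边缘中点最近的候选目标,队列为空时返回 None
    (对应"进入下一帧运动轨迹的更新")。
    """
    if not candidates:
        return None
    ax, ay = img_w / 2.0, float(img_h)        # 车头前方图像的下边缘中点

    def dist(box):
        x, y, w, h = box
        cx, cy = x + w / 2.0, y + float(h)    # 目标框下边缘中点
        return math.hypot(cx - ax, cy - ay)

    return min(candidates, key=dist)
```

图像下边缘中点大致对应本车车头正前方,因此离它最近的候选目标即为同车道内最靠近本车的车辆。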
可选地,可以通过报警装置输出语音或图像提醒当前车辆的驾驶员,和/或通过光线的变化提醒驾驶员,例如夜间改变车内的氛围灯等。
通过上述实施例,可以准确判断出更新后的候选目标队列中的候选目标是否为当前车辆的前一车辆,并在存在候选目标为当前车辆的前一车辆时,判断出当前车辆的前一车辆发生运动,生成用于提醒当前车辆启动的提醒信息,以提醒当前车辆及时动作。
具体地,本申请实施例是基于重力传感器和单目视觉的前车起步的处理方法,其中的单目视觉是指可以通过一台摄像机获取当前车辆的车头前方场景信息并进行智能分析。基于通过一台摄像机获取的当前车辆的车头前方图像的图像特征和重力传感器获取的加速度信息,采用状态机模型,通过多特征融合可以准确判断当前车辆的行驶状态。在判断出当前车辆的行驶状态为静止状态时,在不同的检测时间段采用不同的车尾检测模型对每一帧车头前方图像进行车尾检测,得到车头前方图像中的一个或多个目标对象,并跟踪各个目标对象得到各个目标对象的运动轨迹。通过车道线检测的检测结果去除目标对象中的旁道车辆,以过滤旁道车辆起步对当前车辆造成的误提醒,在去除旁道车辆之后,判断剩余的目标对象中是否存在当前车辆的前一车辆,若判断出存在当前车辆的前一车辆,则生成提醒信息,以提醒当前车辆及时动作。
在上述的车道线检测中,若检测不到有效车道线,则基于候选目标在车头前方图像中的位置和候选目标的运动轨迹判断候选目标是否满足旁道车辆运动轨迹,若候选目标满足旁道车辆运动轨迹,则确定该候选目标为旁道车辆,删除该候选目标。
通过上述实施例,采用车头前方图像信息和加速度信息的结合准确判断当前车辆的行驶状态,在当前车辆的行驶状态为静止状态时,基于车头前方图像中的目标对象的运动轨迹判断当前车辆的前车是否启动,以判断是否生成提醒信息。该方案在处理过程中使用的参数少,处理结果准确,解决了现有技术在行车过程中无法准确进行前车起步提醒的问题,使得在当前车辆的前一车辆驶出时,可以及时提醒,以使当前车辆及时动作。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
实施例2
根据本申请实施例,还提供了一种前车起步的处理装置,应用于前车起步的处理系统,处理系统包括:视频传感器和重力传感器,如图6所示,该装置包括:采集单元20、确定单元40、获取单元60以及生成单元80。
其中,采集单元20,用于通过视频传感器采集当前车辆的车头前方图像,以及通过重力传感器采集当前车辆的加速度信息。
确定单元40,用于基于车头前方图像的图像特征和/或加速度信息确定当前车辆的行驶状态。
其中,当前车辆的行驶状态可以为静止状态或者运动状态,静止状态是指当前车辆相对路面静止,比如等待红绿灯或堵车时;运动状态是指当前车辆相对路面运动,包括加速、减速、匀速运动。
获取单元60,用于在当前车辆的行驶状态为静止状态时,获取车头前方图像中目标对象的运动轨迹。
其中,车头前方图像中目标对象可以为一个或者多个。
生成单元80,用于根据目标对象的运动轨迹判断是否生成用于提醒当前车辆启动的提醒信息。
采用本申请实施例,基于通过视频传感器采集的车头前方图像的图像特征和/或通过重力传感器采集的加速度信息确定当前车辆的行驶状态,在当前车辆的行驶状态为静止状态时,获取车头前方图像中目标对象(包括前车)的运动轨迹,根据目标对象的运动轨迹判断是否生成用于提醒当前车辆启动的提醒信息。通过上述实施例,采用车头前方图像信息和加速度信息的结合准确判断当前车辆的行驶状态,在当前车辆的行驶状态为静止状态时,基于车头前方图像中的目标对象的运动轨迹判断当前车辆的前车是否启动,以判断是否需要生成提醒信息。该方案在处理过程中使用的参数少,处理结果准确,解决了现有技术在行车过程中无法准确进行前车起步提醒的问题,使得在当前车辆的前一车辆驶出时,可以及时提醒,以使当前车辆及时动作。
上述实施例中,可以通过安装在当前车辆后视镜后方挡风玻璃上的摄像机实时捕获当前车辆的车头前方的连续画面,以获取当前车辆的车头前方图像,重力传感器实时获取当前车辆的加速度信息,该加速度信息包括当前车辆在世界坐标系三个不同方向的加速度值。所述的三个不同方向分别为车辆运行方向、垂直于车辆运行方向且与路面平行方向、垂直于路面方向。
通过获取车头前方图像和加速度信息,无需修改车辆线路,比只使用重力传感器确定车辆的行驶状态的方案也更加准确。
可选地,上述实施例中的摄像机可以为一台,通过一台摄像机实时捕获当前车辆的车头前方的连续多帧画面,获取当前车辆的车头前方图像,重力传感器获取当前车辆的加速度信息,基于该车头前方图像的图像特征和/或加速度信息确定当前车辆的行驶状态,若确定的当前车辆的行驶状态为静止状态,则获取车头前方图像中一个或多个目标对象的运动轨迹,基于该一个或多个目标对象的运动轨迹判断当前车辆的前车是否已经驶出,若当前车辆的前车已经驶出,且当前车辆仍处于静止状态,则生成提醒信息。
根据本申请的上述实施例,确定单元包括:第一确定模块,用于在当前车辆的行驶状态为初始状态时,基于连续N帧车头前方图像的帧差前景比例和/或当前车辆的加速度信息确定当前车辆的行驶状态为静止状态或运动状态;第二确定模块,用于在当前车辆的行驶状态不为初始状态时,基于连续N帧车头前方图像的帧差前景比例和/或帧差前景离散度以及当前车辆的加速度信息确定当前车辆的行驶状态为静止状态或运动状态,
其中,车头前方图像的图像特征包括帧差前景比例和帧差前景离散度,帧差前景比例是指帧差前景点个数与图像像素点个数比例;帧差前景离散度是指前景块面积与图像像素点个数比例,用来衡量前景的离散程度。其中的前景块是对帧差前景点进行腐蚀、膨胀、连通域处理得到的矩形块。
上述实施例中的重力传感器获取的加速度信息包括加速度值,能准确反映当前车辆是否处于加减速状态,在当前车辆启动或停止时,加速度值较大,在当前车辆匀速运动或静止时,加速度值较小。
在上述实施例中,在确定当前车辆的行驶状态时使用了车前场景的帧差前景离散度的参数,可以去除旁道车辆对判断当前车辆行驶状态的干扰。
通过上述实施例,基于当前车辆的车头前方图像的图像特征和/或加速度信息确定当前车辆的行驶状态,从而实现对当前车辆行驶状态的准确确定。
具体地,在当前车辆的行驶状态为初始状态时,基于连续N帧车头前方图像的帧差前景比例和/或当前车辆的加速度信息确定当前车辆的行驶状态为静止状态或运动状态包括:若连续N帧车头前方图像的帧差前景比例大于预设比例阈值、或连续N帧车头前方图像中每帧车头前方图像对应的当前车辆的加速度值大于预设加速度阈值,则确定当前车辆的行驶状态为运动状态,其中,加速度信息包括加速度值;若连续N帧车头前方图像的帧差前景比例不大于预设比例阈值、且连续N帧车头前方图像中每帧车头前方图像对应的当前车辆的加速度值不大于预设加速度阈值,则确定当前车辆的行驶状态为静止状态。
其中,加速度值为通过重力传感器采集的加速度信息。
进一步地,在当前车辆的行驶状态不为初始状态时,基于连续N帧车头前方图像的帧差前景比例和/或帧差前景离散度以及当前车辆的加速度信息确定当前车辆的行驶状态为静止状态或运动状态包括:在当前车辆的行驶状态为运动状态的情况下,若连续N帧车头前方图像中每帧车头前方图像对应的当前车辆的加速度值不大于预设加速度阈值、且连续N帧车头前方图像的帧差前景比例不大于预设比例阈值,则确定当前车辆的行驶状态变更为静止状态;或,若连续N帧车头前方图像中每帧车头前方图像对应的当前车辆的加速度值不大于预设加速度阈值、且连续N帧车头前方图像的帧差前景离散度不大于预设离散度阈值,则确定当前车辆的行驶状态变更为静止状态;否则,确定当前车辆的行驶状态为运动状态。在当前车辆的行驶状态为静止状态的情况下,若连续N帧车头前方图像中每帧车头前方图像对应的当前车辆的加速度值大于预设加速度阈值,则确定当前车辆的行驶状态变更为运动状态,否则,确定当前车辆的行驶状态为静止状态。
根据本申请的上述实施例,获取单元包括:车尾检测模块,用于对车头前方图像进行车尾检测,得到车头前方图像中的一个或多个目标对象;目标跟踪模块,用于对检测到的各个目标对象进行目标跟踪,得到各个目标对象的运动轨迹。
具体地,对车头前方图像进行车尾检测包括:在不同的检测时间段采用不同的车尾检测模型对车头前方图像进行车尾检测。
可选地,在当前车辆行驶状态为静止状态时,对通过一台摄像机采集的当前车辆的当前帧的车头前方图像进行车尾检测,在当前车辆行驶状态为运动状态时,关闭车尾检测。
在该实施例中,车尾检测模块采用离线学习方式,分别训练两个检测时间段(如白天和晚上两个检测时间段)的两个车尾模型,车尾模型通过提取车尾样本HOG特征,用Adaboost算法训练得到。在进行车尾检测时,白天采用训练得到的白天车尾模型,到了夜间,自适应切换为夜间车尾模型,可以实现在不同时间段调用不同模型,保证了前车起步的处理方法在夜间的可用性。
可选地,车尾检测模块以特定尺度因子对车头前方图像进行降采样,得到降采样图像(降采样相当于生成对应图像的缩略图,可提高系统处理速度),对每个降采样图像通过滑动窗口方式扫描,计算各滑动窗口图像与训练得到的车尾模型的匹配度,匹配度较高的窗口作为目标对象输出,得到车头前方图像中的一个或多个目标对象,并对得到的一个或多个目标对象进行目标跟踪。其中,特定尺度因子可以为预先设置的值。
目标跟踪模块采用基于颜色的目标跟踪算法ColorCSK,可对多个目标对象同时进行跟踪。ColorCSK算法以颜色为图像特征,利用频率域实现快速相关匹配,该算法包括建模和匹配跟踪两部分:建模时,将目标对象的车尾位置作为训练样本,根据期望响应在频率域中更新目标模型参数;匹配跟踪时,利用已有的车尾模型对目标对象进行相关匹配,找到响应值最高位置作为当前帧图像的目标对象的轨迹位置输出。
可选地,在目标跟踪过程中,由于同一目标对象与当前车辆的距离不同,在图像中形成的像的大小也会不同,ColorCSK算法具有尺度自适应性,可维持不同尺寸的多个跟踪模型,以实现对同一目标对象(如车辆)的持续稳定跟踪,得到各个目标对象的跟踪轨迹。
在当前车辆行驶状态为运动状态时,停止目标跟踪。
根据本申请的上述实施例,生成单元包括:判断模块,用于基于各个目标对象的运动轨迹判断当前车辆的前一车辆是否发生运动;生成模块,用于若当前车辆的前一车辆发生运动,则生成用于提醒当前车辆启动的提醒信息,其中,前一车辆为与当前车辆行驶在同一车道且在当前车辆车头前方的车辆。
具体地,判断模块包括:确定子模块,用于基于各个目标对象的运动轨迹确定各个目标对象的运行方向;第一判断子模块,用于利用目标对象的运行方向和运动轨迹的长度判断目标对象是否为正在运动的候选目标,得到候选目标队列;第二判断子模块,用于判断候选目标队列中的各个候选目标是否为旁道目标,其中,旁道目标为与当前车辆行驶在不同车道的目标;删除子模块,用于若候选目标为旁道目标,则从候选目标队列中删除旁道目标,得到更新后的候选目标队列;第三判断子模块,用于判断更新后的候选目标队列中的候选目标是否为当前车辆的前一车辆;第四判断子模块,用于若更新后的候选目标队列中的候选目标为当前车辆的前一车辆,则判断出当前车辆的前一车辆发生运动。
具体地,在当前车辆的行驶状态为静止状态时,通过车尾检测和目标跟踪得到一个或多个目标对象的运动轨迹,基于各个目标对象的运动轨迹确定各个目标对象的运行方向(转弯或直行),并利用目标对象的运动轨迹和与运行方向对应的预设判断条件确定目标对象是否为正在运动的候选目标,得到候选目标队列。判断候选目标队列中的各个候选目标是否与当前车辆行驶在同一车道,若候选目标与当前车辆行驶在不同车道,则该候选目标为旁道目标,从候选目标队列中删除该候选目标,得到更新后的候选目标队列,并判断更新后的候选目标队列中的候选目标是否为当前车辆的前一车辆,前一车辆是指与当前车辆行驶在同一车道且在当前车辆车头前方的车辆,若为当前车辆的前一车辆,判断出当前车辆的前一车辆发生运动,则生成用于提醒当前车辆启动的提醒信息。
通过上述实施例,基于各个目标对象的运动轨迹判断当前车辆的前一车辆是否发生运动,若当前车辆的前一车辆发生运动,则生成用于提醒当前车辆启动的提醒信息,以提醒当前车辆及时动作。
确定子模块用于:若目标对象的运动轨迹的拟合曲线曲率大于预设曲率阈值,则确定目标对象的运行方向为转弯;若目标对象的运动轨迹的拟合曲线曲率不大于预设曲率阈值,则确定目标对象的运行方向为直行。
第一判断子模块用于:若目标对象的运行方向为转弯,且目标对象的运行轨迹的长度大于第一预设长度阈值,则确定目标对象为正在运动的候选目标;若目标对象的运行方向为直行,且目标对象的运行轨迹的长度大于第二预设长度阈值,则确定目标对象为正在运动的候选目标。
第二判断子模块用于:检测车头前方图像中的车道线;若检测到车道线,则判断候选目标是否与当前车辆行驶在同一车道内,若候选目标与当前车辆行驶在同一车道内,则候选目标不是旁道目标;若未检测到车道线,基于候选目标在车头前方图像中的位置和候选目标的运动轨迹判断候选目标是否满足旁道车辆行驶轨迹,若候选目标满足旁道车辆行驶轨迹,则确定候选目标为旁道目标;若候选目标不满足旁道车辆行驶轨迹,则确定候选目标不是旁道目标。
第三判断子模块用于:通过更新后的候选目标队列中的所有候选目标的初始位置和各个候选目标之间的相对位置判断候选目标是否为前一车辆。具体地,第三判断子模块通过更新后的候选目标队列中的所有候选目标的初始位置和各个候选目标之间的相对位置,确定距离车头前方图像的下边缘中点最短的候选目标为前一车辆。
具体地,本申请实施例是基于重力传感器和单目视觉的前车起步的处理方法,其中的单目视觉是指可以通过一台摄像机获取当前车辆的车头前方场景信息并进行智能分析。基于通过一台摄像机获取的当前车辆的车头前方图像的图像特征和重力传感器获取的加速度信息,采用状态机模型,通过多特征融合可以准确判断当前车辆的行驶状态。在判断出当前车辆的行驶状态为静止状态时,在不同的检测时间段采用不同的车尾检测模型对每一帧车头前方图像进行车尾检测,得到车头前方图像中的一个或多个目标对象,并跟踪各个目标对象得到各个目标对象的运动轨迹。通过车道线检测的检测结果去除目标对象中的旁道车辆,以过滤旁道车辆起步对当前车辆造成的误提醒,在去除旁道车辆之后,判断剩余的目标对象中是否存在当前车辆的前一车辆,若判断出存在当前车辆的前一车辆,则生成提醒信息,以提醒当前车辆及时动作。
在上述的车道线检测中,若检测不到有效车道线,则基于候选目标在车头前方图像中的位置和候选目标的运动轨迹判断候选目标是否满足旁道车辆运动轨迹,若候选目标满足旁道车辆运动轨迹,则确定该候选目标为旁道车辆,删除该候选目标。
通过上述实施例,采用车头前方图像信息和加速度信息的结合准确判断当前车辆的行驶状态,在当前车辆的行驶状态为静止状态时,基于车头前方图像中的目标对象的运动轨迹判断当前车辆的前车是否启动,以判断是否生成提醒信息。该方案在处理过程中使用的参数少,处理结果准确,解决了现有技术在行车过程中无法准确进行前车起步提醒的问题,使得在当前车辆的前一车辆驶出时,可以及时提醒,以使当前车辆及时动作。
本实施例中所提供的各个模块,与方法实施例中对应步骤的实现方式相同,应用场景也可以相同。当然,需要注意的是,上述模块涉及的方案可以不限于上述实施例中的内容和场景,且上述模块可以运行在计算机终端或移动终端,可以通过软件或硬件实现。
实施例3
根据本申请实施例,又提供了一种前车起步的处理系统,如图7所示,该处理系统可以包括:视频传感器71、重力传感器73以及处理器75。
其中,视频传感器71,安装在当前车辆的前挡风玻璃上,该视频传感器与当前车辆的后视镜位于同一水平线,用于采集当前车辆的车头前方图像。
重力传感器73,用于采集当前车辆的加速度信息。
处理器75,与视频传感器71和重力传感器73连接,用于基于车头前方图像的图像特征和/或加速度信息确定当前车辆的行驶状态,在当前车辆的行驶状态为静止状态时,获取车头前方图像中目标对象的运动轨迹,并根据目标对象的运动轨迹判断是否生成用于提醒当前车辆启动的提醒信息。
采用本申请实施例,安装在当前车辆的前挡风玻璃上的视频传感器采集当前车辆的车头前方图像,重力传感器采集当前车辆的加速度信息,基于采集的车头前方图像的图像特征和/或加速度信息的加速度值,确定当前车辆的行驶状态,在当前车辆的行驶状态为静止状态时,获取车头前方图像中目标对象的运动轨迹,根据目标对象的运动轨迹判断是否生成用于提醒当前车辆启动的提醒信息。
在上述实施例中,可以使用一个视频传感器获取当前车辆的车头前方场景的图像信息,在该方案中基于重力传感器和单目视觉实现前车起步提醒,提高了提醒的准确性。
如图8所示,根据本申请的上述实施例,处理系统还可以包括:提醒单元87,用于将提醒信息以声音和/或图像方式输出。
本申请上述实施例中的视频传感器的功能可以通过摄像机实现,上述的提醒单元可以为报警单元。
在本申请实施例中,摄像机81采集当前车辆的车头前方图像,重力传感器83采集当前车辆的加速度信息,处理单元85基于采集的车头前方图像的图像特征和/或加速度信息的加速度值,确定当前车辆的行驶状态,在当前车辆的行驶状态为静止状态时,获取车头前方图像中目标对象的运动轨迹,根据目标对象的运动轨迹判断是否生成用于提醒当前车辆启动的提醒信息,若生成提醒信息,则可以通过提醒单元87(如报警单元)输出语音或图像提醒当前车辆的驾驶员,和/或通过光线的变化提醒驾驶员。
可选地,前车起步的提醒系统包括视频传感器、重力传感器、处理器以及提醒单元,该提醒系统安装在当前车辆后视镜后方挡风玻璃上,其中的视频传感器(如摄像机)保持水平,该摄像机可以与当前车辆的后视镜位于同一水平线,安装示意图如图9所示。
通过上述实施例,采用当前车辆的车头前方图像信息和加速度信息的结合准确判断当前车辆的行驶状态,在当前车辆的行驶状态为静止状态时,基于车头前方图像中的目标对象的运动轨迹判断当前车辆的前车是否启动,以判断是否需要生成提醒信息。该方案在处理过程中使用的参数少,处理结果准确,解决了现有技术在行车过程中无法准确进行前车起步提醒的问题,使得在当前车辆的前一车辆驶出时,可以及时提醒,以使当前车辆及时动作。
本申请实施例提供了一种应用程序,所述应用程序用于在运行时执行所述前车起步的处理方法,可以包括:
通过视频传感器采集当前车辆的车头前方图像,以及通过重力传感器采集所述当前车辆的加速度信息;
基于所述车头前方图像的图像特征和/或所述加速度信息确定所述当前车辆的行驶状态;
在所述当前车辆的行驶状态为静止状态时,获取所述车头前方图像中目标对象的运动轨迹;
根据所述目标对象的运动轨迹判断是否生成用于提醒所述当前车辆启动的提醒信息。
应用本申请实施例,基于通过视频传感器采集的车头前方图像的图像特征和/或通过重力传感器采集的加速度信息确定当前车辆的行驶状态,在当前车辆的行驶状态为静止状态时,获取车头前方图像中目标对象(包括前车)的运动轨迹,根据目标对象的运动轨迹判断是否生成用于提醒当前车辆启动的提醒信息。通过上述实施例,采用车头前方图像信息和加速度信息的结合准确判断当前车辆的行驶状态。在当前车辆的行驶状态为静止状态时,基于车头前方图像中的目标对象的运动轨迹判断当前车辆的前车是否启动,以判断是否需要生成提醒信息。该方案在处理过程中使用的参数少,处理结果准确,解决了现有技术在行车过程中无法准确进行前车起步提醒的技术问题,使得在当前车辆的前一车辆驶出时,可以及时提醒,以使当前车辆及时动作。
本申请实施例提供了一种存储介质,所述存储介质用于存储应用程序,所述应用程序用于在运行时执行所述前车起步的处理方法,可以包括:
通过视频传感器采集当前车辆的车头前方图像,以及通过重力传感器采集所述当前车辆的加速度信息;
基于所述车头前方图像的图像特征和/或所述加速度信息确定所述当前车辆的行驶状态;
在所述当前车辆的行驶状态为静止状态时,获取所述车头前方图像中目标对象的运动轨迹;
根据所述目标对象的运动轨迹判断是否生成用于提醒所述当前车辆启动的提醒信息。
应用本申请实施例,基于通过视频传感器采集的车头前方图像的图像特征和/或通过重力传感器采集的加速度信息确定当前车辆的行驶状态,在当前车辆的行驶状态为静止状态时,获取车头前方图像中目标对象(包括前车)的运动轨迹,根据目标对象的运动轨迹判断是否生成用于提醒当前车辆启动的提醒信息。通过上述实施例,采用车头前方图像信息和加速度信息的结合准确判断当前车辆的行驶状态。在当前车辆的行驶状态为静止状态时,基于车头前方图像中的目标对象的运动轨迹判断当前车辆的前车是否启动,以判断是否需要生成提醒信息。该方案在处理过程中使用的参数少,处理结果准确,解决了现有技术在行车过程中无法准确进行前车起步提醒的技术问题,使得在当前车辆的前一车辆驶出时,可以及时提醒,以使当前车辆及时动作。
对于前车起步的处理系统、应用程序以及存储介质实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的技术内容,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,可以为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述仅是本申请的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。

Claims (22)

  1. 一种前车起步的处理方法,其特征在于,应用于前车起步的处理系统,所述处理系统包括:视频传感器和重力传感器,所述处理方法包括:
    通过所述视频传感器采集当前车辆的车头前方图像,以及通过所述重力传感器采集所述当前车辆的加速度信息;
    基于所述车头前方图像的图像特征和/或所述加速度信息确定所述当前车辆的行驶状态;
    在所述当前车辆的行驶状态为静止状态时,获取所述车头前方图像中目标对象的运动轨迹;
    根据所述目标对象的运动轨迹判断是否生成用于提醒所述当前车辆启动的提醒信息。
  2. 根据权利要求1所述的处理方法,其特征在于,基于所述车头前方图像的图像特征和/或所述加速度信息确定所述当前车辆的行驶状态包括:
    在所述当前车辆的行驶状态为初始状态时,基于连续N帧车头前方图像的帧差前景比例和/或所述当前车辆的加速度信息确定所述当前车辆的行驶状态为所述静止状态或运动状态;
    在所述当前车辆的行驶状态不为所述初始状态时,基于连续N帧车头前方图像的帧差前景比例和/或帧差前景离散度以及所述当前车辆的加速度信息确定所述当前车辆的行驶状态为所述静止状态或所述运动状态,
    其中,所述车头前方图像的图像特征包括所述帧差前景比例和所述帧差前景离散度。
  3. 根据权利要求2所述的处理方法,其特征在于,在所述当前车辆的行驶状态为初始状态时,基于连续N帧车头前方图像的帧差前景比例和/或所述当前车辆的加速度信息确定所述当前车辆的行驶状态为静止状态或运动状态包括:
    若连续N帧车头前方图像的帧差前景比例大于预设比例阈值、或连续N帧车头前方图像中每帧车头前方图像对应的所述当前车辆的加速度值大于预设加速度阈值,则确定所述当前车辆的行驶状态为所述运动状态,其中,所述加速度信息包括所述加速度值;
    若连续N帧车头前方图像的帧差前景比例不大于所述预设比例阈值、且连续N帧车头前方图像中每帧车头前方图像对应的所述当前车辆的加速度值不大于所述预设加速度阈值,则确定所述当前车辆的行驶状态为所述静止状态。
  4. 根据权利要求2所述的处理方法,其特征在于,在所述当前车辆的行驶状态不为所述初始状态时,基于连续N帧车头前方图像的帧差前景比例和/或帧差前景离散度以及所述当前车辆的加速度信息确定所述当前车辆的行驶状态为所述静止状态或所述运动状态包括:
    在所述当前车辆的行驶状态为所述运动状态的情况下,若连续N帧车头前方图像中每帧车头前方图像对应的所述当前车辆的加速度值不大于预设加速度阈值、且连续N帧车头前方图像的帧差前景比例不大于预设比例阈值,则确定所述当前车辆的行驶状态变更为所述静止状态;或,若连续N帧车头前方图像中每帧车头前方图像对应的所述当前车辆的加速度值不大于预设加速度阈值、且连续N帧车头前方图像的帧差前景离散度不大于预设离散度阈值,则确定所述当前车辆的行驶状态变更为所述静止状态;
    在所述当前车辆的行驶状态为所述静止状态的情况下,若连续N帧车头前方图像中每帧车头前方图像对应的所述当前车辆的加速度值大于所述预设加速度阈值,则确定所述当前车辆的行驶状态变更为所述运动状态,否则,确定所述当前车辆的行驶状态为所述静止状态。
  5. 根据权利要求1所述的处理方法,其特征在于,在所述当前车辆的行驶状态为静止状态时,获取所述车头前方图像中目标对象的运动轨迹包括:
    对所述车头前方图像进行车尾检测,得到所述车头前方图像中的一个或多个目标对象;
    对检测到的各个所述目标对象进行目标跟踪,得到各个所述目标对象的运动轨迹。
  6. 根据权利要求1所述的处理方法,其特征在于,根据所述目标对象的运动轨迹判断是否生成用于提醒所述当前车辆启动的提醒信息包括:
    基于各个所述目标对象的运动轨迹判断所述当前车辆的前一车辆是否发生运动;
    若所述当前车辆的前一车辆发生运动,则生成所述用于提醒所述当前车辆启动的提醒信息。
  7. 根据权利要求6所述的处理方法,其特征在于,基于各个所述目标对象的运动轨迹判断所述当前车辆的前一车辆是否发生运动包括:
    基于各个所述目标对象的运动轨迹确定各个所述目标对象的运行方向;
    利用所述目标对象的运行方向和运动轨迹的长度判断所述目标对象是否为正在运动的候选目标,得到候选目标队列;
    判断所述候选目标队列中的各个所述候选目标是否为旁道目标,其中,所述旁道目标为与所述当前车辆行驶在不同车道的目标;
    若所述候选目标为所述旁道目标,则从所述候选目标队列中删除所述旁道目标,得到更新后的候选目标队列;
    判断所述更新后的候选目标队列中的候选目标是否为所述当前车辆的前一车辆;
    若所述更新后的候选目标队列中的所述候选目标为所述当前车辆的前一车辆,则判断出所述当前车辆的前一车辆发生运动。
  8. 根据权利要求7所述的处理方法,其特征在于,基于各个所述目标对象的运动轨迹确定各个所述目标对象的运行方向包括:
    若所述目标对象的运动轨迹的拟合曲线曲率大于预设曲率阈值,则确定所述目标对象的运行方向为转弯;
    若所述目标对象的运动轨迹的拟合曲线曲率不大于所述预设曲率阈值,则确定所述目标对象的运行方向为直行。
  9. 根据权利要求8所述的处理方法,其特征在于,利用所述目标对象的运行方向和运动轨迹的长度判断所述目标对象是否为正在运动的候选目标包括:
    若所述目标对象的运行方向为转弯,且所述目标对象的运行轨迹的长度大于第一预设长度阈值,则确定所述目标对象为所述正在运动的候选目标;
    若所述目标对象的运行方向为直行,且所述目标对象的运行轨迹的长度大于第二预设长度阈值,则确定所述目标对象为所述正在运动的候选目标。
  10. 根据权利要求7所述的处理方法,其特征在于,判断所述候选目标队列中的各个所述候选目标是否为旁道目标包括:
    检测所述车头前方图像中的车道线;
    若检测到所述车道线,则判断所述候选目标是否与所述当前车辆行驶在同一车道内,若所述候选目标与所述当前车辆行驶在同一车道内,则所述候选目标不是所述旁道目标;
    若未检测到所述车道线,基于所述候选目标在所述车头前方图像中的位置和所述候选目标的运动轨迹判断所述候选目标是否满足旁道车辆行驶轨迹;
    若所述候选目标满足所述旁道车辆行驶轨迹,则确定所述候选目标为所述旁道目标;
    若所述候选目标不满足所述旁道车辆行驶轨迹,则确定所述候选目标不是所述旁道目标。
  11. 根据权利要求7所述的处理方法,其特征在于,判断所述更新后的候选目标队列中的候选目标是否为所述当前车辆的前一车辆包括:
    通过所述更新后的候选目标队列中的所有候选目标的初始位置和各个所述候选目标之间的相对位置判断所述候选目标是否为所述前一车辆。
  12. 根据权利要求11所述的处理方法,其特征在于,通过所述更新后的候选目标队列中的所有候选目标的初始位置和各个所述候选目标之间的相对位置判断所述候选目标是否为所述前一车辆包括:
    通过所述更新后的候选目标队列中的所有候选目标的初始位置和各个所述候选目标之间的相对位置,确定距离所述车头前方图像的下边缘中点最短的候选目标为所述前一车辆。
  13. 根据权利要求5所述的处理方法,其特征在于,对所述车头前方图像进行车尾检测包括:
    在不同的检测时间段采用不同的车尾检测模型对所述车头前方图像进行车尾检测。
  14. 一种前车起步的处理装置,其特征在于,应用于前车起步的处理系统,所述处理系统包括:视频传感器和重力传感器,所述处理装置包括:
    采集单元,用于通过所述视频传感器采集当前车辆的车头前方图像,以及通过所述重力传感器采集所述当前车辆的加速度信息;
    确定单元,用于基于所述车头前方图像的图像特征和/或所述加速度信息确定所述当前车辆的行驶状态;
    获取单元,用于在所述当前车辆的行驶状态为静止状态时,获取所述车头前方图像中目标对象的运动轨迹;
    生成单元,用于根据所述目标对象的运动轨迹判断是否生成用于提醒所述当前车辆启动的提醒信息。
  15. 根据权利要求14所述的处理装置,其特征在于,所述确定单元包括:
    第一确定模块,用于在所述当前车辆的行驶状态为初始状态时,基于连续N帧车头前方图像的帧差前景比例和/或所述当前车辆的加速度信息确定所述当前车辆的行驶状态为所述静止状态或运动状态;
    第二确定模块,用于在所述当前车辆的行驶状态不为所述初始状态时,基于连续N帧车头前方图像的帧差前景比例和/或帧差前景离散度以及所述当前车辆的加速度信息确定所述当前车辆的行驶状态为所述静止状态或所述运动状态,
    其中,所述车头前方图像的图像特征包括所述帧差前景比例和所述帧差前景离散度。
  16. 根据权利要求14所述的处理装置,其特征在于,所述获取单元包括:
    车尾检测模块,用于对所述车头前方图像进行车尾检测,得到所述车头前方图像中的一个或多个目标对象;
    目标跟踪模块,用于对检测到的各个所述目标对象进行目标跟踪,得到各个所述目标对象的运动轨迹。
  17. 根据权利要求14所述的处理装置,其特征在于,所述生成单元包括:
    判断模块,用于基于各个所述目标对象的运动轨迹判断所述当前车辆的前一车辆是否发生运动;
    生成模块,用于若所述当前车辆的前一车辆发生运动,则生成所述用于提醒所述当前车辆启动的提醒信息。
  18. 根据权利要求17所述的处理装置,其特征在于,所述判断模块包括:
    确定子模块,用于基于各个所述目标对象的运动轨迹确定各个所述目标对象的运行方向;
    第一判断子模块,用于利用所述目标对象的运行方向和运动轨迹的长度判断所述目标对象是否为正在运动的候选目标,得到候选目标队列;
    第二判断子模块,用于判断所述候选目标队列中的各个所述候选目标是否为旁道目标,其中,所述旁道目标为与所述当前车辆行驶在不同车道的目标;
    删除子模块,用于若所述候选目标为所述旁道目标,则从所述候选目标队列中删除所述旁道目标,得到更新后的候选目标队列;
    第三判断子模块,用于判断所述更新后的候选目标队列中的候选目标是否为所述当前车辆的前一车辆;
    第四判断子模块,用于若所述更新后的候选目标队列中的所述候选目标为所述当前车辆的前一车辆,则判断出所述当前车辆的前一车辆发生运动。
  19. 一种前车起步的处理系统,其特征在于,包括:
    视频传感器,安装在当前车辆的前挡风玻璃上,所述视频传感器与所述当前车辆的后视镜位于同一水平线,用于采集所述当前车辆的车头前方图像;
    重力传感器,用于采集所述当前车辆的加速度信息;
    处理器,与所述视频传感器和所述重力传感器连接,用于基于所述车头前方图像的图像特征和/或所述加速度信息确定所述当前车辆的行驶状态,在所述当前车辆的行驶状态为静止状态时,获取所述车头前方图像中目标对象的运动轨迹,并根据所述目标对象的运动轨迹判断是否生成用于提醒所述当前车辆启动的提醒信息。
  20. 根据权利要求19所述的处理系统,其特征在于,所述处理系统还包括:
    提醒单元,与所述处理器连接,用于将所述提醒信息以声音和/或图像方式输出。
  21. 一种应用程序,其特征在于,所述应用程序用于在运行时执行权利要求1-13任一项所述的前车起步的处理方法。
  22. 一种存储介质,其特征在于,所述存储介质用于存储应用程序,所述应用程序用于在运行时执行权利要求1-13任一项所述的前车起步的处理方法。
PCT/CN2016/085438 2015-10-23 2016-06-12 前车起步处理方法、装置和系统 WO2017067187A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP16856628.9A EP3367361B1 (en) 2015-10-23 2016-06-12 Method, device and system for processing startup of front vehicle
US15/770,280 US10818172B2 (en) 2015-10-23 2016-06-12 Method, device and system for processing startup of preceding vehicle

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510700316.6A CN106611512B (zh) 2015-10-23 2015-10-23 前车起步的处理方法、装置和系统
CN201510700316.6 2015-10-23

Publications (1)

Publication Number Publication Date
WO2017067187A1 true WO2017067187A1 (zh) 2017-04-27

Family

ID=58556675

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/085438 WO2017067187A1 (zh) 2015-10-23 2016-06-12 前车起步处理方法、装置和系统

Country Status (4)

Country Link
US (1) US10818172B2 (zh)
EP (1) EP3367361B1 (zh)
CN (1) CN106611512B (zh)
WO (1) WO2017067187A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109798913A (zh) * 2019-03-12 2019-05-24 深圳市天之眼高新科技有限公司 导航界面的显示方法、装置及存储介质
CN111149142A (zh) * 2017-09-28 2020-05-12 株式会社电装 控制对象车辆设定装置、控制对象车辆设定系统及控制对象车辆设定方法
CN111284490A (zh) * 2018-12-06 2020-06-16 海信集团有限公司 车载双目相机检测前车溜车的方法及车载双目相机
CN113469126A (zh) * 2021-07-23 2021-10-01 浙江大华技术股份有限公司 一种运动状态检测方法、装置、检测设备及存储介质

Families Citing this family (16)

Publication number Priority date Publication date Assignee Title
WO2019017198A1 (ja) * 2017-07-19 2019-01-24 株式会社デンソー 車両用表示装置及び表示制御装置
CN108556851A (zh) * 2017-12-14 2018-09-21 浙江鼎奕科技发展有限公司 一种前车起步提示系统及方法
CN108564681B (zh) * 2018-04-17 2021-12-31 百度在线网络技术(北京)有限公司 数据处理方法、装置、计算设备、程序产品和存储介质
CN108877231A (zh) * 2018-07-07 2018-11-23 程子涵 一种基于车辆起停提醒的车车互联方法及系统
CN110717361A (zh) * 2018-07-13 2020-01-21 长沙智能驾驶研究院有限公司 本车停车检测方法、前车起步提醒方法及存储介质
CN111469839A (zh) * 2019-01-22 2020-07-31 上海汽车集团股份有限公司 一种自动跟随驾驶的方法及装置
CN112750321A (zh) * 2019-10-31 2021-05-04 奥迪股份公司 辅助当前车辆通过道路交叉口的系统、方法和存储介质
CN112750297B (zh) * 2019-10-31 2022-09-16 广州汽车集团股份有限公司 一种车队控制方法、调度处理系统、车辆及控制系统
CN110675645A (zh) * 2019-11-01 2020-01-10 常州工学院 一种监测方法及装置
CN111767850A (zh) * 2020-06-29 2020-10-13 北京百度网讯科技有限公司 突发事件的监控方法、装置、电子设备和介质
CN112046499A (zh) * 2020-09-11 2020-12-08 中国第一汽车股份有限公司 一种车辆起步提醒方法、车辆起步提醒装置及车辆
CN112165682B (zh) * 2020-09-25 2021-12-28 上海龙旗科技股份有限公司 一种用于车载设备的上报位置的方法、系统及设备
WO2022178802A1 (zh) * 2021-02-26 2022-09-01 华为技术有限公司 前车起步检测方法及装置
CN113119992A (zh) * 2021-04-30 2021-07-16 东风汽车集团股份有限公司 一种智能前车驶离提醒方法和系统
CN113830085B (zh) * 2021-09-26 2024-02-13 上汽通用五菱汽车股份有限公司 车辆跟停起步方法、装置、设备及计算机可读存储介质
CN116691626B (zh) * 2023-08-08 2023-10-31 徐州奥特润智能科技有限公司 基于人工智能的车辆制动系统及方法

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102022201A (zh) * 2009-09-21 2011-04-20 福特环球技术公司 用于增强的起动性能的辅助直接起动发动机控制
CN202175009U (zh) * 2011-07-29 2012-03-28 富士重工业株式会社 车辆用驾驶辅助装置
KR101344056B1 (ko) * 2013-09-25 2014-01-16 주식회사 피엘케이 테크놀로지 차량의 출발 정지 지원 장치 및 그 방법
CN104508720A (zh) * 2012-08-01 2015-04-08 丰田自动车株式会社 驾驶辅助装置
CN104827968A (zh) * 2015-04-08 2015-08-12 上海交通大学 一种基于安卓的低成本停车等待驾驶提醒系统

Family Cites Families (19)

Publication number Priority date Publication date Assignee Title
JPH09109724A (ja) * 1995-10-17 1997-04-28 Mitsubishi Electric Corp 車両発進警報装置
DE19942371A1 (de) * 1999-09-04 2001-03-08 Valeo Schalter & Sensoren Gmbh Verfahren und Vorrichtung zum Erkennen des möglichen Anfahrens eines Kraftfahrzeugs
JP2003515827A (ja) 1999-11-26 2003-05-07 モービルアイ インク 輸送手段の動きのパスに沿って記録された連続イメージを使用して、移動する輸送手段のエゴモーションを予測するためのシステムおよび方法
US6437690B1 (en) * 2000-09-27 2002-08-20 Pathfins C. Okezie Uninsured and/or stolen vehicle tracking system
DE102005033087A1 (de) * 2005-07-15 2007-01-25 Robert Bosch Gmbh Verfahren und Vorrichtung zur Vermeidung von Auffahrunfällen
DE102006009654A1 (de) 2006-03-02 2007-11-08 Robert Bosch Gmbh Vorrichtung zum An- und Abschalten eines Fahrzeugmotors in Abhängigkeit von der Verkehrssituation
US7313869B1 (en) * 2006-07-18 2008-01-01 Snap-On Incorporated Vehicle wheel alignment system and methodology
DE102009050519A1 (de) * 2009-10-23 2011-04-28 Bayerische Motoren Werke Aktiengesellschaft Verfahren zur Fahrerinformation
CN101875348B (zh) * 2010-07-01 2011-12-28 浙江工业大学 基于计算机视觉的误将油门当刹车错误操作防止装置
JP5645769B2 (ja) * 2011-08-01 2014-12-24 株式会社日立製作所 画像処理装置
CN202716801U (zh) * 2012-07-04 2013-02-06 戴奇 车辆启动提醒装置
TWI536326B (zh) * 2012-07-20 2016-06-01 緯創資通股份有限公司 車輛碰撞事故通報系統及方法
DE102012022150A1 (de) * 2012-11-10 2014-05-15 Audi Ag Kraftfahrzeug und Verfahren zum Betrieb eines Kraftfahrzeugs
JP6082638B2 (ja) * 2013-03-29 2017-02-15 日立オートモティブシステムズ株式会社 走行制御装置及び走行制御システム
US9912916B2 (en) * 2013-08-13 2018-03-06 GM Global Technology Operations LLC Methods and apparatus for utilizing vehicle system integrated remote wireless image capture
US11081008B2 (en) * 2013-12-20 2021-08-03 Magna Electronics Inc. Vehicle vision system with cross traffic detection
US9447741B2 (en) * 2014-01-17 2016-09-20 Ford Global Technologies, Llc Automatic engine start-stop control
DE102014203806A1 (de) * 2014-03-03 2015-09-03 Bayerische Motoren Werke Aktiengesellschaft Verfahren zur Vorhersage eines voraussichtlichen Anfahrzeitpunktes eines Fahrzeugs
US10262421B2 (en) * 2014-08-04 2019-04-16 Nec Corporation Image processing system for detecting stationary state of moving object from image, image processing method, and recording medium

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN102022201A (zh) * 2009-09-21 2011-04-20 福特环球技术公司 用于增强的起动性能的辅助直接起动发动机控制
CN202175009U (zh) * 2011-07-29 2012-03-28 富士重工业株式会社 车辆用驾驶辅助装置
CN104508720A (zh) * 2012-08-01 2015-04-08 丰田自动车株式会社 驾驶辅助装置
KR101344056B1 (ko) * 2013-09-25 2014-01-16 주식회사 피엘케이 테크놀로지 차량의 출발 정지 지원 장치 및 그 방법
CN104827968A (zh) * 2015-04-08 2015-08-12 上海交通大学 一种基于安卓的低成本停车等待驾驶提醒系统

Non-Patent Citations (1)

Title
See also references of EP3367361A4 *

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN111149142A (zh) * 2017-09-28 2020-05-12 株式会社电装 控制对象车辆设定装置、控制对象车辆设定系统及控制对象车辆设定方法
CN111149142B (zh) * 2017-09-28 2022-09-27 株式会社电装 控制对象车辆设定装置及其系统、方法
CN111284490A (zh) * 2018-12-06 2020-06-16 海信集团有限公司 车载双目相机检测前车溜车的方法及车载双目相机
CN111284490B (zh) * 2018-12-06 2021-06-04 海信集团有限公司 车载双目相机检测前车溜车的方法及车载双目相机
CN109798913A (zh) * 2019-03-12 2019-05-24 深圳市天之眼高新科技有限公司 导航界面的显示方法、装置及存储介质
CN113469126A (zh) * 2021-07-23 2021-10-01 浙江大华技术股份有限公司 一种运动状态检测方法、装置、检测设备及存储介质

Also Published As

Publication number Publication date
US20190057604A1 (en) 2019-02-21
EP3367361A4 (en) 2019-07-31
CN106611512A (zh) 2017-05-03
US10818172B2 (en) 2020-10-27
EP3367361A1 (en) 2018-08-29
EP3367361B1 (en) 2024-01-17
CN106611512B (zh) 2020-02-07

Similar Documents

Publication Publication Date Title
WO2017067187A1 (zh) 前车起步处理方法、装置和系统
CN108725440B (zh) 前向碰撞控制方法和装置、电子设备、程序和介质
KR102453627B1 (ko) 딥러닝 기반 교통 흐름 분석 방법 및 시스템
US10627228B2 (en) Object detection device
JP4987573B2 (ja) 車外監視装置
CN112349144B (zh) 一种基于单目视觉的车辆碰撞预警方法及系统
JP4692344B2 (ja) 画像認識装置
CN111932901B (zh) 道路车辆跟踪检测设备、方法及存储介质
JP6650596B2 (ja) 車両状況判定装置、車両状況判定方法、および車両状況判定プログラム
JPH11275562A (ja) 移動人物監視装置
EP2928178A1 (en) On-board control device
IT201600094414A1 (it) Un procedimento per rilevare un veicolo in sorpasso, relativo sistema di elaborazione, sistema di rilevazione di un veicolo in sorpasso e veicolo
WO2020154990A1 (zh) 目标物体运动状态检测方法、设备及存储介质
CN110598511A (zh) 一种对闯绿灯事件检测方法、装置、电子设备及系统
KR101986734B1 (ko) 차량 운전 보조 장치 및 이의 안전 운전 유도 방법
KR102714691B1 (ko) 카메라 영상의 뎁쓰-맵을 이용한 이동 차량의 장애물 검출 장치
JP2010132056A (ja) 検知装置、検知方法および車両制御装置
CN104239847B (zh) 行车警示方法及车用电子装置
JP2005316607A (ja) 画像処理装置及び画像処理方法
JP6820075B2 (ja) 乗員数検知システム、乗員数検知方法、およびプログラム
JP6822571B2 (ja) 端末装置、危険予測方法、プログラム
JP2009146153A (ja) 移動体検出装置、移動体検出方法および移動体検出プログラム
JP2017167608A (ja) 物体認識装置、物体認識方法及び物体認識プログラム
CN113255612A (zh) 前车起步提醒方法及系统、电子设备和存储介质
KR102448164B1 (ko) 차량용 레이더 제어 장치 및 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16856628

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2016856628

Country of ref document: EP