
CN108550258B - Vehicle queuing length detection method and device, storage medium and electronic equipment - Google Patents

Vehicle queuing length detection method and device, storage medium and electronic equipment

Info

Publication number
CN108550258B
CN108550258B (application CN201810274230.5A)
Authority
CN
China
Prior art keywords
model
vehicle
road
road model
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810274230.5A
Other languages
Chinese (zh)
Other versions
CN108550258A (en)
Inventor
邹博
冯天娇
唐闯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp filed Critical Neusoft Corp
Priority to CN201810274230.5A priority Critical patent/CN108550258B/en
Publication of CN108550258A publication Critical patent/CN108550258A/en
Application granted granted Critical
Publication of CN108550258B publication Critical patent/CN108550258B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G08G1/0133 Traffic data processing for classifying traffic situation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/065 Traffic control systems for road vehicles by counting the vehicles in a section of the road or in a parking area, i.e. comparing incoming count with outgoing count

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a vehicle queuing length detection method and device, a storage medium, and electronic equipment in the field of image recognition. The method comprises the following steps: obtaining a road model of each frame of image from multiple frames of images collected by a camera device at a target intersection; performing foreground extraction on each frame of image to identify a vehicle model in each road model; and obtaining a vehicle motion model corresponding to a first road model according to the optical flow change of the first road model across the multiple frames of images, wherein the vehicle motion model is used for describing the motion track of the vehicle model in the road model, the first road model corresponds to a first lane in the target intersection, and the first lane is any lane in the target intersection. The vehicle queuing length of the first lane is then obtained according to the vehicle model in the first road model and the vehicle motion model of the first road model. The method and the device can improve the efficiency and the accuracy of vehicle queuing length detection.

Description

Vehicle queuing length detection method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image recognition, and in particular, to a method and an apparatus for detecting a vehicle queue length, a storage medium, and an electronic device.
Background
With the growing number of automobiles in China and the rapid development of urban roads, vehicles have become an indispensable means of transportation in people's daily trips. At the same time, the large number of automobiles causes traffic congestion, which pollutes the environment and wastes people's time and money. When congestion occurs on a road, timely and accurate detection of the vehicle queuing length makes it possible to relieve congestion by predicting congestion time, updating road condition information, and flexibly controlling signal lights, thereby improving road utilization and reducing traffic congestion. In the prior art, detection is usually performed only for a single intersection, which is inefficient, and the detection result is inaccurate due to occlusion, illumination variation, and similar factors.
Disclosure of Invention
The invention aims to provide a vehicle queuing length detection method and device, a storage medium, and an electronic device, so as to solve the problems of low efficiency and low accuracy in vehicle queuing length detection.
In order to achieve the above object, according to a first aspect of embodiments of the present disclosure, there is provided a vehicle queue length detection method, the method including:
acquiring a road model of each frame of image according to a plurality of frames of images collected by a camera aiming at a target intersection, wherein the road model of each frame of image is one or more and corresponds to the image information of one or more lanes contained in each frame of image respectively;
performing foreground extraction on each frame of image to identify a vehicle model in each road model;
acquiring a vehicle motion model corresponding to a first road model according to the optical flow change of the first road model of the multi-frame images, wherein the vehicle motion model is used for describing the motion track of the vehicle model in the road model, the first road model corresponds to a first lane in the target intersection, and the first lane is any lane in the target intersection;
and obtaining the vehicle queuing length of the first lane according to the vehicle model in the first road model and the vehicle motion model of the first road model.
Optionally, the obtaining, according to the vehicle model in the first road model and the vehicle motion model of the first road model, the vehicle queuing length of the first lane corresponding to the first road model includes:
when the vehicle model in the first road model and the vehicle motion model of the first road model are in the same area and the speed in the vehicle motion model of the first road model is lower than a preset threshold value, determining that the area where the vehicle model in the first road model is located is a parking area;
and acquiring the vehicle queuing length of the first lane according to the first road model and the size of the parking area.
Optionally, the obtaining a road model of each frame of image according to a plurality of frames of images collected by the camera device for the target intersection includes:
respectively carrying out perspective transformation on the multiple frames of images according to a preset projection matrix to obtain a road model of each frame of image;
and carrying out normalization processing on the road model of each frame of image, wherein the normalization processing comprises the following steps: scale normalization and/or brightness normalization.
Optionally, the scale normalization includes:
normalizing the size of each road model according to the ratio of the size of the lane corresponding to each road model to the size of the vehicle in the corresponding lane;
the brightness normalization includes: and carrying out normalization processing on the brightness of each road model according to the average brightness of each road model and the brightness of each pixel.
Optionally, the performing foreground extraction on each frame of image to identify a vehicle model in each road model in each frame of image includes:
comparing each pixel point in each road model of a first image with a background sample set through a visual background extraction ViBe algorithm, and determining a foreground pixel point in each road model of the first image, wherein the first image is any one frame image in the multi-frame images;
and determining a vehicle model in each road model of the first image according to all the foreground pixel points in each road model of the first image.
Optionally, the comparing, by using the ViBe algorithm, each pixel point in each road model of the first image with the background sample set to determine a foreground pixel point in each road model of the first image includes:
acquiring the number of pixels in an intersection of all pixels in the neighborhood of a preset radius of a first pixel in the first road model and a first background sample set, wherein the first pixel is any one pixel in the first road model, and the first background sample set is the background sample set of the first pixel;
when the number of the pixels is larger than a preset number threshold, determining that the first pixel point is a background pixel point;
after all background pixel points in the first road model are obtained, determining pixel points in the first road model except all background pixel points as the foreground pixel points.
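As an illustrative sketch of the pixel classification described above, the following Python fragment counts how many background samples fall within a preset radius of each pixel's value and labels the pixel as background when the count reaches a threshold, as in the standard ViBe test. The radius and count threshold are hypothetical placeholders, not values from the patent.

```python
import numpy as np

def foreground_mask(gray, sample_stack, radius=20, min_matches=2):
    """ViBe-style classification over an H x W grayscale road model.

    `sample_stack` is an H x W x N stack of background samples per
    pixel. A pixel whose value matches fewer than `min_matches`
    samples (within `radius` in value space) is labelled foreground.
    The adaptive threshold adjustment driven by the vehicle texture
    model is omitted for brevity.
    """
    diffs = np.abs(sample_stack.astype(int) - gray.astype(int)[..., None])
    matches = np.sum(diffs < radius, axis=-1)
    return matches < min_matches  # True -> foreground pixel
```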
Optionally, the performing foreground extraction on each frame of image to identify a vehicle model in each road model further includes:
acquiring a vehicle texture model corresponding to the first road model according to the gradient of the gray value of the first road model, wherein the vehicle texture model is used for indicating the contour of the vehicle model in the first road model;
and correcting the vehicle model in the first road model according to the vehicle texture model corresponding to the first road model.
Optionally, the performing foreground extraction on each frame of image to identify a vehicle model in each road model further includes:
acquiring a shadow area of the first road model according to the average brightness of the first road model;
correcting a shadow area of the first road model according to a vehicle texture model corresponding to the first road model;
and correcting the vehicle model in the first road model according to the corrected shadow area of the first road model.
Optionally, the comparing, by using the ViBe algorithm, each pixel point in each road model of the first image with the background sample set to determine a foreground pixel point in each road model of the first image, further includes:
increasing the quantity threshold value when the first pixel point belongs to the vehicle texture model of the first road model;
and when the first pixel point does not belong to the vehicle texture model of the first road model, reducing the quantity threshold value.
Optionally, the obtaining, according to the optical flow change of the first road model of the multiple frames of images, a vehicle motion model corresponding to the first road model includes:
determining a motion direction angle and a motion speed of each pixel point according to the optical flow change of each pixel point in the first road model of the multi-frame image;
and obtaining a vehicle motion model corresponding to the first road model according to the motion direction angle and the motion speed of each pixel point.
Optionally, the obtaining, according to the optical flow change of the first road model of the multiple frames of images, a vehicle motion model corresponding to the first road model further includes:
determining a road structure model according to the average value of the motion direction angle and the motion speed of each pixel point, wherein the road structure model is used for indicating the motion characteristics of the vehicle model included in the road model;
and correcting the vehicle motion model according to the road structure model.
Optionally, the obtaining the vehicle queuing length of the first lane according to the first road model and the length of the parking area includes:
and converting the length of the parking area in the first road model into the actual length of the parking area on the first lane according to a preset calibration matrix to be used as the vehicle queuing length.
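A minimal sketch of this conversion, assuming the calibration matrix is a 3 x 3 homography mapping road-model coordinates to road-plane coordinates in metres; the matrix and endpoints below are illustrative, not values from the patent:

```python
import numpy as np

def parking_area_length(p_start, p_end, calib):
    """Map the two endpoints of the parking area in the road model
    through the calibration homography and return the distance
    between them on the actual road plane."""
    pts = np.array([[p_start[0], p_start[1], 1.0],
                    [p_end[0], p_end[1], 1.0]])
    mapped = pts @ calib.T
    mapped = mapped[:, :2] / mapped[:, 2:3]  # perspective divide
    return float(np.linalg.norm(mapped[1] - mapped[0]))
```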
According to a second aspect of the embodiments of the present disclosure, there is provided a vehicle queue length detection apparatus, the apparatus including:
the road model acquisition module is used for acquiring a road model of each frame of image according to a plurality of frames of images acquired by the camera equipment for the target intersection, wherein the number of the road models of each frame of image is one or more, and the one or more road models respectively correspond to the image information of one or more lanes contained in each frame of image;
the vehicle model identification module is used for carrying out foreground extraction on each frame of image so as to identify a vehicle model in each road model;
a vehicle motion model obtaining module, configured to obtain a vehicle motion model corresponding to a first road model of the multiple frames of images according to an optical flow change of the first road model, where the vehicle motion model is used to describe a motion trajectory of the vehicle model in a road model, the first road model corresponds to a first lane in the target intersection, and the first lane is any one lane in the target intersection;
and the length obtaining module is used for obtaining the vehicle queuing length of the first lane according to the vehicle model in the first road model and the vehicle motion model of the first road model.
Optionally, the length obtaining module includes:
the parking area determining submodule is used for determining that an area where the vehicle model in the first road model is located is a parking area when the vehicle model in the first road model and the vehicle motion model of the first road model are in the same area and the speed in the vehicle motion model of the first road model is lower than a preset threshold value;
and the length obtaining submodule is used for obtaining the vehicle queuing length of the first lane according to the first road model and the size of the parking area.
Optionally, the road model obtaining module includes:
the perspective transformation submodule is used for respectively carrying out perspective transformation on the multiple frames of images according to a preset projection matrix so as to obtain a road model of each frame of image;
the normalization submodule is used for performing normalization processing on the road model of each frame of image, and the normalization processing comprises the following steps: scale normalization and/or brightness normalization.
Optionally, the scale normalization includes:
normalizing the size of each road model according to the ratio of the size of the lane corresponding to each road model to the size of the vehicle in the corresponding lane;
the brightness normalization includes: and carrying out normalization processing on the brightness of each road model according to the average brightness of each road model and the brightness of each pixel.
Optionally, the vehicle model identification module includes:
the foreground extraction submodule is used for comparing each pixel point in each road model of a first image with a background sample set through a visual background extraction ViBe algorithm, and determining a foreground pixel point in each road model of the first image, wherein the first image is any one frame image in the multi-frame images;
and the identification submodule is used for determining a vehicle model in each road model of the first image according to all the foreground pixel points in each road model of the first image.
Optionally, the foreground extraction sub-module is configured to:
acquiring the number of pixels in an intersection of all pixels in the neighborhood of a preset radius of a first pixel in the first road model and a first background sample set, wherein the first pixel is any one pixel in the first road model, and the first background sample set is the background sample set of the first pixel;
when the number of the pixels is larger than a preset number threshold, determining that the first pixel point is a background pixel point;
after all background pixel points in the first road model are obtained, determining pixel points in the first road model except all background pixel points as the foreground pixel points.
Optionally, the vehicle model identification module further includes:
the texture model obtaining sub-module is used for obtaining a vehicle texture model corresponding to the first road model according to the gradient of the gray value of the first road model, and the vehicle texture model is used for indicating the contour of a vehicle model in the first road model;
and the first correction submodule is used for correcting the vehicle model in the first road model according to the vehicle texture model corresponding to the first road model.
Optionally, the vehicle model identification module further includes:
the shadow area acquisition sub-module is used for acquiring a shadow area of the first road model according to the average brightness of the first road model;
the second correction submodule is used for correcting the shadow area of the first road model according to the vehicle texture model corresponding to the first road model;
the second correction submodule is further configured to correct a vehicle model in the first road model according to the corrected shadow region of the first road model.
Optionally, the foreground extraction sub-module is further configured to:
increasing the quantity threshold value when the first pixel point belongs to the vehicle texture model of the first road model;
and when the first pixel point does not belong to the vehicle texture model of the first road model, reducing the quantity threshold value.
Optionally, the vehicle motion model obtaining module includes:
the first obtaining submodule is used for determining the motion direction angle and the motion speed of each pixel point according to the optical flow change of each pixel point in the first road model of the multi-frame images;
and the second obtaining submodule is used for obtaining a vehicle motion model corresponding to the first road model according to the motion direction angle and the motion speed of each pixel point.
Optionally, the vehicle motion model obtaining module further includes:
the road structure model obtaining submodule is used for determining a road structure model according to the average value of the motion direction angle and the motion speed of each pixel point, and the road structure model is used for indicating the motion characteristics of the vehicle model included in the road model;
and the correction submodule is used for correcting the vehicle motion model according to the road structure model.
Optionally, the length obtaining sub-module is configured to: and converting the length of the parking area in the first road model into the actual length of the parking area on the first lane according to a preset calibration matrix to be used as the vehicle queuing length.
According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the vehicle queue length detection method provided by the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: the computer-readable storage medium provided by the third aspect; and one or more processors to execute the computer program in the computer-readable storage medium.
According to the technical scheme, multiple frames of images of one or more intersections are first collected by the camera equipment, and road models corresponding to the one or more lanes contained in each frame of image are obtained from the multiple frames of images. Foreground extraction is then performed on each frame of image to identify the vehicle models in each road model, and corresponding vehicle motion models are obtained according to the optical flow changes in each road model. Finally, the vehicle queuing length of the lane corresponding to each road model is obtained by combining the vehicle models in each road model with the information in the corresponding vehicle motion models.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flow chart illustrating a vehicle queue length detection method according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating another vehicle queue length detection method in accordance with an exemplary embodiment;
FIG. 3 is a flow chart illustrating another vehicle queue length detection method in accordance with an exemplary embodiment;
FIG. 4 is a flow chart illustrating another vehicle queue length detection method in accordance with an exemplary embodiment;
FIG. 5 is a flow chart illustrating another vehicle queue length detection method in accordance with an exemplary embodiment;
FIG. 6 is a flow chart illustrating another vehicle queue length detection method in accordance with an exemplary embodiment;
FIG. 7 is a flow chart illustrating another vehicle queue length detection method in accordance with an exemplary embodiment;
FIG. 8 is a flow chart illustrating another vehicle queue length detection method in accordance with an exemplary embodiment;
FIG. 9 is a block diagram illustrating a vehicle queue length detection apparatus in accordance with an exemplary embodiment;
FIG. 10 is a block diagram illustrating another vehicle queue length detection apparatus in accordance with an exemplary embodiment;
FIG. 11 is a block diagram illustrating another vehicle queue length detection apparatus in accordance with an exemplary embodiment;
FIG. 12 is a block diagram illustrating another vehicle queue length detection apparatus in accordance with an exemplary embodiment;
FIG. 13 is a block diagram illustrating another vehicle queue length detection apparatus in accordance with an exemplary embodiment;
FIG. 14 is a block diagram illustrating another vehicle queue length detection apparatus in accordance with an exemplary embodiment;
FIG. 15 is a block diagram illustrating another vehicle queue length detection apparatus in accordance with an exemplary embodiment;
FIG. 16 is a block diagram illustrating another vehicle queue length detection apparatus in accordance with an exemplary embodiment;
FIG. 17 is a block diagram illustrating an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Before introducing the vehicle queue length detection method, apparatus, storage medium, and electronic device provided by the present disclosure, an application scenario related to the various embodiments of the present disclosure is first introduced. The application scenario may be an intersection (which may contain a plurality of directions) where a high-angle camera device is installed. The camera device may be a video camera, a high-speed camera, or the like, and the captured image information includes the vehicles on the road and the background.
FIG. 1 is a flow chart illustrating a vehicle queue length detection method, as shown in FIG. 1, according to an exemplary embodiment, including:
step 101, acquiring a road model of each frame of image according to a plurality of frames of images collected by a camera device for a target intersection, wherein the number of the road models of each frame of image is one or more, and the one or more road models correspond to image information of one or more lanes contained in each frame of image respectively.
For example, at a target intersection where the vehicle queuing length needs to be detected, one or more high-angle camera devices are arranged so that images of the target intersection can be shot from multiple angles without occlusion. The target intersection may have one or more directions, such as the four directions of a crossroads, the three directions of a T-junction, or the single direction of a one-way road, and each direction of the target intersection may include one or more lanes. According to the information contained in the multiple frames of images, the road model corresponding to one or more lanes in each frame of image can be obtained. It should be noted that a road model is obtained by applying a unified transformation of viewing angle and scale to a frame of image, yielding the image information of the area where a certain lane is located; therefore, one road model corresponds to one lane. That is, the area of a lane at the target intersection in a certain frame of image (after the above transformation of viewing angle and scale) can be regarded as the road model corresponding to that lane in that frame. The collected multiple frames of images can be obtained by shooting the same intersection from different angles in real time through the high-angle camera devices, so that the obtained frames can include images of the same intersection from different angles at different moments, where at each moment the frames shot from different angles for the same intersection are obtained. In addition, it is worth explaining that collecting images from different angles can improve recognition accuracy; optionally, however, frames of the same intersection from the same angle at different moments can also be used as the multiple frames of images for obtaining the road models.
And 102, performing foreground extraction on each frame of image to identify a vehicle model in each road model.
For example, foreground extraction is a method of extracting the moving foreground by screening the pixels in an image. Foreground extraction is performed on each frame of image to identify the foreground pixels in that frame; the foreground pixels are then divided into one or more sets according to their degree of aggregation, and the one or more sets formed by the foreground pixels are taken as the vehicle models in a road model, where one road model may include one or more vehicle models corresponding to one or more vehicles in the lane. A vehicle model can be understood as the image of a vehicle in the road model, and the vehicle contour in the road model can be accurately identified through foreground extraction.
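The grouping of foreground pixels by their degree of aggregation can be sketched as a connected-component pass over the foreground mask; the minimum blob size below is a hypothetical noise threshold, not a value from the patent.

```python
from collections import deque
import numpy as np

def group_foreground(mask, min_pixels=4):
    """Group connected foreground pixels (4-connectivity) into
    candidate vehicle models; blobs smaller than `min_pixels`
    are discarded as noise."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                q, blob = deque([(y, x)]), []
                seen[y, x] = True
                while q:  # breadth-first flood fill of one blob
                    cy, cx = q.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(blob) >= min_pixels:
                    blobs.append(blob)
    return blobs
```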
103, acquiring a vehicle motion model corresponding to a first road model according to the optical flow change of the first road model of the multi-frame images, wherein the vehicle motion model is used for describing the motion track of the vehicle model in the road model, the first road model corresponds to a first lane in the target intersection, and the first lane is any lane in the target intersection.
For example, optical flow change can reflect the motion state of the pixels in an image through the temporal change and correlation of the pixels; that is, each pixel in the image corresponds to a velocity vector, and the image can be analyzed dynamically according to the velocity-vector characteristics of its pixels. Therefore, by analyzing the optical flow change of the first road model across the multiple frames of images, the vehicle motion model corresponding to the first road model can be obtained, which reflects the motion track of the vehicle model in the first road model, i.e., the motion track of the vehicles in the first lane.
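Assuming a dense optical-flow field has already been computed between consecutive road-model frames (e.g. with a Farneback-style method), the per-pixel velocity vectors can be converted into the motion direction angles and speeds that make up the vehicle motion model; this is only a sketch of that step, not the patent's implementation.

```python
import numpy as np

def motion_fields(flow, dt=1.0):
    """`flow` is an (H, W, 2) array of per-pixel displacements
    (dx, dy) between two road-model frames separated by `dt`.
    Returns per-pixel motion direction angles (radians) and
    motion speeds (pixels per unit time)."""
    dx, dy = flow[..., 0], flow[..., 1]
    angle = np.arctan2(dy, dx)        # motion direction angle
    speed = np.hypot(dx, dy) / dt     # motion speed
    return angle, speed
```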
And 104, obtaining the vehicle queuing length of the first lane according to the vehicle model in the first road model and the vehicle motion model of the first road model.
For example, in any frame of image, the vehicle model in any road model is an image describing the contour of a vehicle in that road model, so it can accurately show the area occupied by the vehicle in the road model. The vehicle motion model, on the other hand, describes the vehicle motion track in the corresponding road model and can likewise show the area of that track in the road model. The vehicle model in each road model can therefore be combined with the information in the corresponding vehicle motion model: when the area of the vehicle model in a road model matches the area of the corresponding vehicle motion model (for example, the overlap reaches a certain degree), and the motion track represented by the vehicle motion model is at or near a standstill, the parking area can be determined from the areas of the vehicle model and the vehicle motion model, and the vehicle queuing length of the lane corresponding to the road model can then be obtained.
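The matching step above can be sketched as intersecting the vehicle-model mask with the low-speed region of the motion model and measuring the extent of the result along the lane direction; the speed threshold is an illustrative placeholder, not a value from the patent.

```python
import numpy as np

def parked_mask(vehicle_mask, speed, speed_threshold=0.5):
    """A pixel belongs to the parking area when it lies inside a
    vehicle model and the motion model reports it as (nearly) static."""
    return vehicle_mask & (speed < speed_threshold)

def queue_length_pixels(parked):
    """Length of the parking area along the lane direction (rows of
    the road model), in road-model pixels."""
    rows = np.any(parked, axis=1)
    idx = np.nonzero(rows)[0]
    return 0 if idx.size == 0 else int(idx[-1] - idx[0] + 1)
```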
In summary, according to the present disclosure, a camera device is used to collect multiple frames of images of one or more intersections; road models corresponding to the one or more lanes contained in each frame of image are obtained from the multiple frames of images; foreground extraction is performed on each frame of image to identify the vehicle models in each road model; corresponding vehicle motion models are further obtained according to the optical flow changes in each road model; and finally, the vehicle queuing length of the lane corresponding to each road model is obtained by combining the vehicle models in each road model with the information in the corresponding vehicle motion models. Vehicle information on the road can thus be extracted and the vehicle queuing length of each lane monitored in real time, improving the efficiency and accuracy of vehicle queuing length detection.
FIG. 2 is a flow chart illustrating another vehicle queue length detection method according to an exemplary embodiment, as shown in FIG. 2, step 101 comprising:
and step 1011, respectively carrying out perspective transformation on the multiple frames of images according to a preset projection matrix to obtain a road model of each frame of image.
For example, the preset projection matrix is determined according to information such as the position of the target intersection, the position of the camera device, and the width of the lanes; through perspective transformation it projects the lanes in each image onto a uniform viewing plane, thereby producing the road model of each frame of image.
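The projective mapping described above can be illustrated with a small sketch, which is not the patent's implementation: it applies an arbitrary 3×3 projection (homography) matrix to a single pixel coordinate; the matrix values used in the example are hypothetical.

```python
# Sketch: map an image point (u, v) through a preset 3x3 projection matrix H
# onto the unified road-model plane. Homogeneous coordinates are divided by
# the third component to return to Cartesian coordinates.

def project_point(H, u, v):
    """Map pixel (u, v) through homography H (3x3 nested list)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # homogeneous -> Cartesian

# The identity homography leaves points unchanged (a sanity check).
H_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

Applying `project_point` to every pixel of a frame (or, equivalently, to the lane region of interest) yields the road model of that frame.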
Step 1012, performing normalization processing on the road model of each frame of image, where the normalization processing includes: scale normalization and/or brightness normalization.
Wherein, the scale normalization can be realized by the following steps:
and normalizing the size of each road model according to the proportion of the size of the lane corresponding to each road model to the size of the vehicle in the corresponding lane.
Scale normalization converts views of the same lane taken from different viewing angles and distances to a common calibrated scale, which enhances the adaptability of the vehicle queuing length detection method.
Brightness normalization can be achieved as follows:
and normalizing the brightness of each road model according to the average brightness of each road model and the brightness of each pixel.
For example, the global average brightness of a frame image (represented by the matrix I) may be computed first; the frame is then divided into N × M regions and the matrix D of per-region average brightness is computed; the difference matrix E between the per-region averages and the global average is then obtained; finally, E is interpolated with a bicubic method into a brightness distribution matrix R of the same size as the frame image, and I − R is taken as the brightness-normalized image.
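A simplified sketch of this brightness normalization follows, with a grayscale image held as a list of lists. It is not the patent's implementation: for brevity the per-region difference is subtracted directly over each region (a nearest-region lookup) instead of being upsampled with bicubic interpolation, and the image size is assumed to divide evenly into N × M regions.

```python
# Simplified brightness normalization: subtract each region's deviation
# from the global average brightness (the text's E = D - global average),
# producing an image with evened-out illumination (I - R).

def normalize_brightness(img, n, m):
    h, w = len(img), len(img[0])
    global_avg = sum(sum(row) for row in img) / (h * w)
    rh, rw = h // n, w // m  # region height/width (assumes exact division)
    out = [row[:] for row in img]
    for bi in range(n):
        for bj in range(m):
            block = [img[i][j]
                     for i in range(bi * rh, (bi + 1) * rh)
                     for j in range(bj * rw, (bj + 1) * rw)]
            diff = sum(block) / len(block) - global_avg  # E entry
            for i in range(bi * rh, (bi + 1) * rh):
                for j in range(bj * rw, (bj + 1) * rw):
                    out[i][j] -= diff  # I - R
    return out
```

With bicubic interpolation of E in place of the per-region subtraction, the result would transition smoothly between regions as described in the text.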
FIG. 3 is a flow chart illustrating another vehicle queue length detection method, according to an exemplary embodiment, as shown in FIG. 3, step 102 includes:
step 1021, comparing each pixel point in each road model of the first image with the background sample set through a visual background extraction ViBe algorithm, and determining a foreground pixel point in each road model of the first image, wherein the first image is any one frame image in the multi-frame images.
Illustratively, step 1021 may comprise the steps of:
a. Obtain the number of elements in the intersection between the first background sample set and the set of pixel values within a preset radius of the value of a first pixel point in the first road model, where the first pixel point is any pixel point in the first road model and the first background sample set is the background sample set of the first pixel point.
b. When the number of pixels is greater than a preset number threshold, determine that the first pixel point is a background pixel point.
c. After all background pixel points in the first road model have been obtained, determine the pixel points in the first road model other than the background pixel points as foreground pixel points.
For example, the idea of the ViBe (visual background extraction) algorithm is to store, for each pixel point in the image, a sample set containing historical values of that pixel point and of its neighboring points; each new pixel value is then compared with the sample set, and if it is similar to the samples, the pixel point is judged to belong to the background. Taking the first pixel point x as an example, v(x) denotes the value at point x, M(x) = {v1, v2, …, vn} denotes the sample set of point x, and SR[v(x)] denotes the set of values within a region of preset radius centered on v(x). When the number of pixels in step a, i.e. the number of elements in SR[v(x)] ∩ M(x), is greater than a preset number threshold (for example, 5), the first pixel point is very similar to its sample set and can be determined to be a background pixel point. After all background pixel points in the first road model have been determined, the remaining pixel points in the first road model are determined to be foreground pixel points.
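The ViBe-style test of steps a and b can be sketched as follows for scalar gray values; the radius and count threshold shown are hypothetical defaults, not values fixed by the patent.

```python
# Sketch of the ViBe background test: count the samples in the pixel's
# background sample set that lie within a preset radius of the current
# value, and classify the pixel as background when that count exceeds
# the number threshold, i.e. |SR[v(x)] ∩ M(x)| > threshold.

def is_background(value, samples, radius=20, count_threshold=2):
    matches = sum(1 for s in samples if abs(value - s) < radius)
    return matches > count_threshold
```

In the full algorithm, pixels judged to be background also randomly refresh their sample set over time, so the model adapts to gradual scene changes.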
Step 1022, determining a vehicle model in each road model of the first image according to all foreground pixel points in each road model of the first image.
For example, after all foreground pixel points in each road model of the first image have been identified, they may be divided into one or more sets according to their degree of aggregation within each road model, each set corresponding to one vehicle model in that road model.
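One concrete way to group foreground pixels by aggregation, offered as an assumption rather than the patent's stated method, is connected component labelling on the binary foreground mask:

```python
# Sketch: 4-connected component labelling of a binary foreground mask via
# an iterative flood fill; each component is a candidate vehicle model.

def connected_components(mask):
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for si in range(h):
        for sj in range(w):
            if mask[si][sj] and not seen[si][sj]:
                stack, comp = [(si, sj)], []
                seen[si][sj] = True
                while stack:
                    i, j = stack.pop()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and mask[ni][nj] and not seen[ni][nj]:
                            seen[ni][nj] = True
                            stack.append((ni, nj))
                comps.append(comp)
    return comps
```

Small components can then be discarded as noise, leaving one component per candidate vehicle model.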
FIG. 4 is a flow chart illustrating another vehicle queue length detection method according to an exemplary embodiment, as shown in FIG. 4, step 102 further comprising:
and 1023, acquiring a vehicle texture model corresponding to the first road model according to the gradient of the gray value of the first road model, wherein the vehicle texture model is used for indicating the contour of the vehicle model in the first road model.
And step 1024, correcting the vehicle model in the first road model according to the vehicle texture model corresponding to the first road model.
For example, the contours of different objects in an image can be distinguished according to the intensity of change of the gray values, so the corresponding vehicle texture model can be obtained from the gradient of the gray values of the first road model.
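A minimal sketch of the gray-value gradient underlying the texture model, assuming a central-difference approximation (the patent does not specify which gradient operator is used):

```python
# Sketch: central-difference gradient magnitude at an interior pixel of a
# grayscale image held as a list of lists. Large magnitudes mark strong
# gray-value changes and hence trace object contours.

def gradient_magnitude(img, i, j):
    gx = (img[i][j + 1] - img[i][j - 1]) / 2.0
    gy = (img[i + 1][j] - img[i - 1][j]) / 2.0
    return (gx * gx + gy * gy) ** 0.5
```

Thresholding this magnitude over the road model yields a contour map that can serve as the vehicle texture model.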
FIG. 5 is a flow chart illustrating another vehicle queue length detection method according to an exemplary embodiment, as shown in FIG. 5, step 102 further comprising:
and 1025, acquiring the shadow area of the first road model according to the average brightness of the first road model.
And step 1026, correcting the shadow area of the first road model according to the vehicle texture model corresponding to the first road model.
And step 1027, correcting the vehicle model in the first road model according to the corrected shadow area of the first road model.
For example, shadows in an image generally interfere with image recognition and reduce its accuracy, so identifying shadow regions can improve the accuracy of vehicle model recognition. A shadow region is first obtained according to the average brightness of the first road model; the shadow region is then corrected according to the vehicle texture model corresponding to the first road model; finally, vehicle models that are dark and were misjudged as shadow are recovered according to the corrected shadow region, thereby improving the accuracy of vehicle model recognition.
Optionally, step 1021 may further include the steps of:
d. and when the first pixel point belongs to the vehicle texture model of the first road model, increasing the quantity threshold value.
e. And when the first pixel point does not belong to the vehicle texture model of the first road model, reducing the quantity threshold value.
For example, in the foreground pixel extraction of step 1021, the number threshold may be adjusted with the help of the vehicle texture model of the first road model. When the first pixel point belongs to the vehicle texture model, it is unlikely to be a background pixel point, so the number threshold may be increased (reducing the probability that the pixel point is determined to be background); conversely, when the first pixel point does not belong to the vehicle texture model, it is likely to be a background pixel point, so the number threshold may be decreased (increasing the probability that the pixel point is determined to be background).
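The texture-guided threshold adaptation of steps d and e can be sketched as a small helper; the step size and the clamping bounds are assumed values not given in the patent.

```python
# Sketch: raise the match-count threshold for pixels on the vehicle texture
# model (making them harder to classify as background) and lower it for
# pixels off the texture model, clamped to assumed bounds.

def adapt_threshold(threshold, on_texture, step=1, lo=2, hi=10):
    if on_texture:
        return min(threshold + step, hi)
    return max(threshold - step, lo)
```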
FIG. 6 is a flow chart illustrating another vehicle queue length detection method, according to an exemplary embodiment, as shown in FIG. 6, step 103 includes:
and step 1031, determining the motion direction angle and the motion speed of each pixel point according to the optical flow change of each pixel point in the first road model of the multi-frame image.
And 1032, acquiring a vehicle motion model corresponding to the first road model according to the motion direction angle and the motion speed of each pixel point.
For example, each frame of image may be divided into I × J small blocks, and features for tracking are then selected in the frame: any suitable pixel point may be chosen, or a pixel block composed of several pixel points. The motion direction angle and motion speed of each feature are estimated from the optical flow changes produced by vehicle motion across the multiple frames of images, a model reflecting the motion trajectory of the feature is determined from that angle and speed, and the influence of noise is removed by a filtering operation. Thus, for the first road model, a pixel point is selected in the multi-frame image (for example, K frames); its motion direction angle and speed are determined from the optical flow change between the i-th (1 < i ≤ K) frame and the (i−1)-th frame, its motion trajectory is determined accordingly, and the trajectory is then updated with the information from the (i+1)-th frame and the i-th frame; the number of image frames may be preset to improve the precision of the motion trajectory. Traversing all pixel points yields the trajectory of each. Since the background of the image is static, its motion direction angle and motion speed are zero and it has no trajectory; only the trajectories of the pixel points in the moving parts of the image are therefore obtained, and these trajectories can serve as the vehicle motion model corresponding to the first road model.
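The per-feature quantities of step 1031 can be sketched as follows: given a tracked point's position in two consecutive frames, its optical-flow displacement yields a motion direction angle and a speed. The frame interval `dt` is a hypothetical parameter.

```python
# Sketch: motion direction angle (degrees) and speed (pixels per frame
# interval) of a tracked point from its displacement between frames.

import math

def motion_vector(p_prev, p_curr, dt=1.0):
    dx, dy = p_curr[0] - p_prev[0], p_curr[1] - p_prev[1]
    angle = math.degrees(math.atan2(dy, dx))  # motion direction angle
    speed = math.hypot(dx, dy) / dt           # displacement per interval
    return angle, speed
```

A stationary background point gives a zero displacement, hence zero speed and no accumulated trajectory, matching the observation above.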
FIG. 7 is a flow chart illustrating another vehicle queue length detection method according to an exemplary embodiment, as shown in FIG. 7, step 103 further comprising:
and 1033, determining a road structure model according to the average value of the motion direction angle and the motion speed of each pixel point, wherein the road structure model is used for indicating the motion characteristics of the vehicle model included in the road model.
Step 1034, the vehicle motion model is modified according to the road structure model.
For example, statistics show that the motion direction angle and motion speed of a moving vehicle follow a Gaussian distribution, so a Gaussian mixture model can be obtained from the average motion direction angle and motion speed of the pixel points in each frame of image (or each frame may be divided into I × J small blocks and I × J Gaussian mixture models obtained from the per-block averages). This Gaussian mixture model serves as the road structure model, indicating the motion characteristics of the vehicle models included in the road model, i.e. the motion trend of all vehicles in the lane corresponding to the road model. It should be noted that the road structure model is obtained from the motion direction angles and speeds of the pixel points in a multi-frame image (for example, K frames): the model obtained from frames 1 to u (u < K) is updated when frame u+1 is obtained, and the influence of noise is then removed by a filtering operation; the number of image frames may be preset to improve the accuracy of the road structure model. The vehicle motion model is then corrected according to the road structure model to improve the accuracy of the vehicle motion model.
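A simplified sketch of the incremental statistics behind such a model follows. It tracks a single Gaussian per block rather than the mixture described above, using Welford's online algorithm; the class name and structure are illustrative assumptions.

```python
# Sketch: incrementally update the mean and (population) variance of an
# observed quantity, e.g. the motion direction angle in one image block,
# as new frames arrive (Welford's online algorithm).

class RunningGaussian:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0
```

A full mixture model would maintain several such components per block, weighting them by how often observations fall near each component.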
FIG. 8 is a flow chart illustrating another vehicle queue length detection method, according to an exemplary embodiment, as shown in FIG. 8, step 104 includes:
step 1041, when the vehicle model in the first road model and the vehicle motion model of the first road model are in the same area, and the speed in the vehicle motion model of the first road model is lower than a preset threshold, determining that the area where the vehicle model in the first road model is located is a parking area.
For example, on the one hand, when the speed in the vehicle motion model of the first road model is lower than the preset threshold, the motion trajectory represented by the vehicle motion model can be judged to be close to a stationary state; such a trajectory is short and close to the position of the vehicle model in the image, whereas a trajectory in a moving state is long. On the other hand, when the vehicle model in the first road model and the vehicle motion model of the first road model are in the same region (that is, the region of the vehicle model substantially coincides with the region of the short trajectory of the corresponding vehicle motion model, or equivalently the overlapping area of the two regions reaches a certain threshold), the vehicle model can be judged to conform to the trajectory represented by the vehicle motion model. When both conditions are satisfied simultaneously, it can be determined which vehicle models in the first road model are close to stationary and which are in motion, and the region occupied by the nearly stationary vehicle models is taken as the parking area of the first road model. The preset threshold, for example 3 km/h, may be set in advance and dynamically adjusted according to actual requirements.
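The two-condition test of step 1041 can be sketched as follows, with regions represented as sets of pixel coordinates; the overlap ratio of 0.5 is an assumed value, and the speed threshold defaults to the 3 km/h example above.

```python
# Sketch: a vehicle model region counts as (part of) the parking area when
# its overlap with the corresponding motion-model region is large enough
# AND the motion-model speed is below the preset threshold.

def is_parking(vehicle_region, motion_region, speed,
               speed_threshold=3.0, overlap_ratio=0.5):
    if not vehicle_region:
        return False
    overlap = len(vehicle_region & motion_region) / len(vehicle_region)
    return overlap >= overlap_ratio and speed < speed_threshold
```

Applying this test to every vehicle model in the first road model and taking the union of the qualifying regions yields the parking area.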
In step 1042, the vehicle queuing length of the first lane is acquired according to the first road model and the size of the parking area.
For example, after the parking area is determined, the actual length of the parking area on the first lane of the target intersection is obtained according to the position of the parking area in the first road model and the position of the first road model in the multi-frame image.
Optionally, step 1042 may be implemented by:
and converting the length of the parking area in the first road model into the actual length of the parking area on the first lane as the vehicle queuing length according to a preset calibration matrix.
For example, the preset calibration matrix is determined according to the position information (e.g., longitude, latitude, and altitude) of the target intersection and of the camera device, and converts lengths in the image acquired by the camera device into actual lengths on the lane. Multiplying the parking area by the calibration matrix converts the length of the parking area in the first road model into the corresponding actual distance on the first lane, which serves as the vehicle queuing length.
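One way to realize this conversion, offered as an assumption rather than the patent's exact formulation, is to map the two endpoints of the parking area through the calibration matrix and take the distance between their real-world images; the matrix values in the test are hypothetical (a uniform 0.05 m/pixel scale).

```python
# Sketch: convert the image-space endpoints of the parking area into
# real-world coordinates via a 3x3 calibration matrix C, then take the
# Euclidean distance as the vehicle queuing length.

import math

def queue_length(C, p_start, p_end):
    def to_world(p):
        x = C[0][0] * p[0] + C[0][1] * p[1] + C[0][2]
        y = C[1][0] * p[0] + C[1][1] * p[1] + C[1][2]
        w = C[2][0] * p[0] + C[2][1] * p[1] + C[2][2]
        return x / w, y / w
    (x1, y1), (x2, y2) = to_world(p_start), to_world(p_end)
    return math.hypot(x2 - x1, y2 - y1)
```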
In summary, according to the present disclosure, a camera device collects multiple frames of images of one or more intersections; a road model corresponding to each of the one or more lanes contained in each frame of image is obtained from these images; foreground extraction is performed on each frame of image to identify the vehicle model in each road model; the corresponding vehicle motion model is then obtained from the optical flow changes in each road model; and finally the vehicle queuing length of the lane corresponding to each road model is obtained by combining the vehicle model in each road model with the information in the corresponding vehicle motion model. Vehicle information in the road can thus be extracted and the vehicle queuing length of each lane monitored in real time, improving both the efficiency and the accuracy of vehicle queuing length detection.
Fig. 9 is a block diagram illustrating a vehicle queue length detecting apparatus according to an exemplary embodiment, and as shown in fig. 9, the apparatus 200 includes:
the road model obtaining module 201 is configured to obtain a road model of each frame of image according to multiple frames of images collected by the camera device for the target intersection, where the road model of each frame of image is one or more, and corresponds to image information of one or more lanes included in each frame of image.
And the vehicle model identification module 202 is used for performing foreground extraction on each frame of image to identify a vehicle model in each road model.
The vehicle motion model obtaining module 203 is configured to obtain a vehicle motion model corresponding to a first road model according to an optical flow change of the first road model of the multi-frame image, where the vehicle motion model is used to describe a motion track of the vehicle model in the road model, the first road model corresponds to a first lane in the target intersection, and the first lane is any lane in the target intersection.
The length obtaining module 204 is configured to obtain the vehicle queuing length of the first lane according to the vehicle model in the first road model and the vehicle motion model of the first road model.
Fig. 10 is a block diagram illustrating another vehicle queue length detecting apparatus according to an exemplary embodiment, and as shown in fig. 10, the length obtaining module 204 includes:
the parking area determining submodule 2041 is configured to determine that an area where the vehicle model in the first road model is located is a parking area when the vehicle model in the first road model and the vehicle motion model of the first road model are in the same area and the speed in the vehicle motion model of the first road model is lower than a preset threshold.
The length obtaining submodule 2042 is configured to obtain the vehicle queuing length of the first lane according to the first road model and the size of the parking area.
Fig. 11 is a block diagram illustrating another vehicle queue length detection apparatus according to an exemplary embodiment, and as shown in fig. 11, the road model acquisition module 201 includes:
the perspective transformation submodule 2011 is configured to perform perspective transformation on multiple frames of images according to a preset projection matrix, so as to obtain a road model of each frame of image.
The normalization submodule 2012 is configured to perform normalization processing on the road model of each frame of image, where the normalization processing includes: scale normalization and/or brightness normalization.
Optionally, the scale normalization includes:
and normalizing the size of each road model according to the proportion of the size of the lane corresponding to each road model to the size of the vehicle in the corresponding lane.
The brightness normalization includes: and normalizing the brightness of each road model according to the average brightness of each road model and the brightness of each pixel.
Fig. 12 is a block diagram illustrating another vehicle queue length detection apparatus according to an exemplary embodiment, and as shown in fig. 12, the vehicle model identification module 202 includes:
the foreground extraction submodule 2021 is configured to compare each pixel point in each road model of the first image with the background sample set through a visual background extraction ViBe algorithm, and determine a foreground pixel point in each road model of the first image, where the first image is any one frame image of the multiple frame images.
The identifying submodule 2022 is configured to determine, according to all foreground pixel points in each road model of the first image, a vehicle model in each road model of the first image.
Optionally, the foreground extraction sub-module 2021 is configured to:
the method comprises the steps of obtaining the number of pixels in an intersection of all pixels in the neighborhood of a preset radius of a first pixel in a first road model and a first background sample set, wherein the first pixel is any one pixel in the first road model, and the first background sample set is the background sample set of the first pixel.
And when the number of the pixels is larger than a preset number threshold, determining that the first pixel point is a background pixel point.
After all background pixel points in the first road model are obtained, determining pixel points in the first road model except all background pixel points as foreground pixel points.
Fig. 13 is a block diagram illustrating another vehicle queue length detecting apparatus according to an exemplary embodiment, and as shown in fig. 13, the vehicle model identification module 202 further includes:
the texture model obtaining sub-module 2023 is configured to obtain a vehicle texture model corresponding to the first road model according to the gradient of the gray value of the first road model, where the vehicle texture model is used to indicate the contour of the vehicle model in the first road model.
The first modification submodule 2024 is configured to modify the vehicle model in the first road model according to the vehicle texture model corresponding to the first road model.
Fig. 14 is a block diagram illustrating another vehicle queue length detection apparatus according to an exemplary embodiment, and as shown in fig. 14, the vehicle model identification module 202 further includes:
the shadow region obtaining sub-module 2025 is configured to obtain a shadow region of the first road model according to the average brightness of the first road model.
And the second correction sub-module 2026 is configured to correct the shadow area of the first road model according to the vehicle texture model corresponding to the first road model.
The second modification sub-module 2026 is further configured to modify the vehicle model in the first road model according to the modified shadow region of the first road model.
Optionally, the foreground extraction sub-module 2021 is further configured to:
and when the first pixel point belongs to the vehicle texture model of the first road model, increasing the quantity threshold value.
And when the first pixel point does not belong to the vehicle texture model of the first road model, reducing the quantity threshold value.
Fig. 15 is a block diagram illustrating another vehicle queue length detection apparatus according to an exemplary embodiment, and as shown in fig. 15, the vehicle motion model acquisition module 203 includes:
the first obtaining submodule 2031 is configured to determine a motion direction angle and a motion speed of each pixel according to an optical flow change of each pixel in the first road model of the multiple frames of images.
The second obtaining submodule 2032 is configured to obtain, according to the motion direction angle and the motion speed of each pixel point, a vehicle motion model corresponding to the first road model.
Fig. 16 is a block diagram illustrating another vehicle queue length detection apparatus according to an exemplary embodiment, and as shown in fig. 16, the vehicle motion model acquisition module 203 further includes:
the road structure model obtaining sub-module 2033 is configured to determine a road structure model according to the average value of the motion direction angle and the motion speed of each pixel, where the road structure model is used to indicate the motion characteristics of the vehicle model included in the road model.
And a correction submodule 2034, configured to correct the vehicle motion model according to the road structure model.
Optionally, the length obtaining sub-module 2042 is configured to: and converting the length of the parking area in the first road model into the actual length of the parking area on the first lane as the vehicle queuing length according to a preset calibration matrix.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In summary, according to the present disclosure, a camera device collects multiple frames of images of one or more intersections; a road model corresponding to each of the one or more lanes contained in each frame of image is obtained from these images; foreground extraction is performed on each frame of image to identify the vehicle model in each road model; the corresponding vehicle motion model is then obtained from the optical flow changes in each road model; and finally the vehicle queuing length of the lane corresponding to each road model is obtained by combining the vehicle model in each road model with the information in the corresponding vehicle motion model. Vehicle information in the road can thus be extracted and the vehicle queuing length of each lane monitored in real time, improving both the efficiency and the accuracy of vehicle queuing length detection.
Fig. 17 is a block diagram illustrating an electronic device 700 in accordance with an example embodiment. As shown in fig. 17, the electronic device 700 may include: a processor 701, a memory 702, multimedia components 703, input/output (I/O) interfaces 704, and communication components 705.
The processor 701 is configured to control the overall operation of the electronic device 700 so as to complete all or part of the steps of the vehicle queue length detection method. The memory 702 is used to store various types of data to support operation of the electronic device 700, such as instructions for any application or method operating on the electronic device 700 and application-related data such as contact data, transmitted and received messages, pictures, audio, and video. The memory 702 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia components 703 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 702 or transmitted through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules such as a keyboard, a mouse, or buttons, which may be virtual or physical. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic Device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the vehicle queue length detection method described above.
In another exemplary embodiment, a computer readable storage medium, such as the memory 702, is also provided that includes program instructions executable by the processor 701 of the electronic device 700 to perform the vehicle queue length detection method described above.
In summary, according to the present disclosure, a camera device collects multiple frames of images of one or more intersections; a road model corresponding to each of the one or more lanes contained in each frame of image is obtained from these images; foreground extraction is performed on each frame of image to identify the vehicle model in each road model; the corresponding vehicle motion model is then obtained from the optical flow changes in each road model; and finally the vehicle queuing length of the lane corresponding to each road model is obtained by combining the vehicle model in each road model with the information in the corresponding vehicle motion model. Vehicle information in the road can thus be extracted and the vehicle queuing length of each lane monitored in real time, improving both the efficiency and the accuracy of vehicle queuing length detection.
Preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Other embodiments of the present disclosure may be readily conceived by those skilled in the art, within the technical spirit of the present disclosure, after considering the description and practicing the disclosure, and all such embodiments fall within the protection scope of the present disclosure.
It should be noted that the features described in the above embodiments may be combined in any suitable manner without departing from the scope of the present disclosure. Likewise, any combination of the various embodiments of the present disclosure belongs to the present disclosure as long as it does not depart from the spirit of the disclosure. The present disclosure is not limited to the precise structures described above, and its scope is limited only by the appended claims.

Claims (22)

1. A vehicle queue length detection method, the method comprising:
acquiring a road model of each frame of image according to a plurality of frames of images collected by a camera aiming at a target intersection, wherein the road model of each frame of image is one or more and corresponds to the image information of one or more lanes contained in each frame of image respectively;
performing foreground extraction on each frame of image to identify a vehicle model in each road model;
acquiring a vehicle motion model corresponding to a first road model according to the optical flow change of the first road model of the multi-frame images, wherein the vehicle motion model is used for describing the motion track of the vehicle model in the road model, the first road model corresponds to a first lane in the target intersection, and the first lane is any lane in the target intersection;
acquiring the vehicle queue length of the first lane corresponding to the first road model according to the vehicle model in the first road model and the vehicle motion model of the first road model;
the obtaining of the vehicle queue length of the first lane corresponding to the first road model according to the vehicle model in the first road model and the vehicle motion model of the first road model includes:
when the vehicle model in the first road model and the vehicle motion model of the first road model are in the same area and the speed in the vehicle motion model of the first road model is lower than a preset threshold value, determining that the area where the vehicle model in the first road model is located is a parking area;
and converting the length of the parking area in the first road model into the actual length of the parking area on the first lane according to a preset calibration matrix to be used as the vehicle queuing length.
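The conversion in claim 1 from the parking area's length in the road model to its actual length on the lane, via a preset calibration matrix, can be sketched as a planar homography applied to the two endpoints of the parking area. The claims do not disclose the matrix's form or interface, so the function name, the point-pair signature, and the example matrices below are illustrative assumptions:

```python
import numpy as np

def parking_length_on_lane(stop_line_pt, queue_end_pt, calib_matrix):
    """Map the two endpoints of the parking area through a 3x3 calibration
    homography and return the Euclidean distance between the mapped points,
    i.e. the actual queue length on the lane."""
    pts = np.array([stop_line_pt, queue_end_pt], dtype=float)   # (2, 2)
    homog = np.hstack([pts, np.ones((2, 1))])                   # homogeneous coords
    mapped = homog @ np.asarray(calib_matrix, float).T
    mapped = mapped[:, :2] / mapped[:, 2:3]                     # dehomogenise
    return float(np.linalg.norm(mapped[1] - mapped[0]))

# Identity calibration: model length equals actual length.
print(parking_length_on_lane((0.0, 0.0), (0.0, 30.0), np.eye(3)))  # 30.0
```

A real calibration matrix would be estimated offline from known lane markings (e.g. stop-line width, lane-marking spacing) rather than assumed.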
2. The method according to claim 1, wherein the obtaining of the road model of each frame of image according to the multiple frames of images collected by the camera device for the target intersection comprises:
respectively carrying out perspective transformation on the multiple frames of images according to a preset projection matrix to obtain a road model of each frame of image;
and carrying out normalization processing on the road model of each frame of image, wherein the normalization processing comprises the following steps: scale normalization and/or brightness normalization.
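The perspective transformation of claim 2 maps camera-view coordinates into the road model through a preset 3x3 projection matrix. A minimal numpy sketch of the point-level operation (the function name and example matrix are assumptions; a full implementation would warp whole images, e.g. with an inverse-mapping loop):

```python
import numpy as np

def warp_points(points, P):
    """Apply a 3x3 projection matrix to 2D image points (perspective
    transform), mapping camera-view coordinates into road-model coordinates."""
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    out = pts @ np.asarray(P, float).T
    return out[:, :2] / out[:, 2:3]   # divide by the homogeneous coordinate

# A matrix with a nonzero bottom-middle entry compresses distant rows,
# as a bird's-eye rectification of a forward-looking camera would.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.001, 1.0]])
print(warp_points([[0.0, 100.0]], P))   # y maps to 100 / 1.1
```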
3. The method of claim 2, wherein the scale normalization comprises:
normalizing the size of each road model according to the ratio of the size of the lane corresponding to each road model to the size of the vehicle in the corresponding lane;
the brightness normalization includes: and carrying out normalization processing on the brightness of each road model according to the average brightness of each road model and the brightness of each pixel.
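The brightness normalization of claim 3 rescales each road model from its average brightness. One plausible reading is a multiplicative gain toward a common target mean; the target value and function interface below are assumptions, and scale normalization (resizing by the lane-to-vehicle size ratio) would be an analogous geometric resampling step:

```python
import numpy as np

def normalize_brightness(road_model, target_mean=128.0):
    """Scale pixel intensities so the road model's average brightness
    matches target_mean, compensating for lighting differences
    between frames and between lanes."""
    mean = float(road_model.mean())
    if mean == 0.0:
        return road_model.astype(float).copy()
    return np.clip(road_model.astype(float) * (target_mean / mean), 0.0, 255.0)

dim_model = np.full((4, 4), 64.0)        # uniformly under-exposed patch
print(normalize_brightness(dim_model).mean())   # 128.0
```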
4. The method of claim 1, wherein the foreground extracting of each image frame to identify a vehicle model in each road model in each image frame comprises:
comparing each pixel point in each road model of a first image with a background sample set through a visual background extraction ViBe algorithm, and determining a foreground pixel point in each road model of the first image, wherein the first image is any one frame image in the multi-frame images;
and determining a vehicle model in each road model of the first image according to all the foreground pixel points in each road model of the first image.
5. The method of claim 4, wherein comparing each pixel point in each road model of the first image with a background sample set by the ViBe algorithm to determine a foreground pixel point in each road model of the first image comprises:
acquiring the number of pixels in an intersection of all pixels in the neighborhood of a preset radius of a first pixel in the first road model and a first background sample set, wherein the first pixel is any one pixel in the first road model, and the first background sample set is the background sample set of the first pixel;
when the number of the pixels is larger than a preset number threshold, determining that the first pixel point is a background pixel point;
after all background pixel points in the first road model are obtained, determining pixel points in the first road model except all background pixel points as the foreground pixel points.
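The per-pixel test of claim 5 follows the standard ViBe scheme: intersect the sphere of preset radius around the pixel's value with its background sample set, and classify as background when the intersection exceeds a count threshold. A grayscale single-pixel sketch (in ViBe the radius is a distance in value space; the function name and default parameters are assumptions):

```python
import numpy as np

def classify_pixel(value, background_samples, radius=20, count_threshold=2):
    """ViBe-style test: count samples lying within `radius` of the pixel
    value; the pixel is background when the count exceeds count_threshold,
    otherwise foreground (a candidate vehicle pixel)."""
    diffs = np.abs(np.asarray(background_samples, float) - value)
    matches = int(np.sum(diffs <= radius))
    return 'background' if matches > count_threshold else 'foreground'

# Three of four samples match -> background; none match -> foreground.
print(classify_pixel(100, [98, 102, 103, 150], radius=5, count_threshold=2))
print(classify_pixel(100, [150, 30, 200, 10], radius=5, count_threshold=2))
```

Claim 8's adaptive variant would simply raise `count_threshold` for pixels inside the vehicle texture model and lower it elsewhere.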
6. The method of claim 1, wherein the foreground extracting of each frame of image to identify a vehicle model in each of the road models further comprises:
acquiring a vehicle texture model corresponding to the first road model according to the gradient of the gray value of the first road model, wherein the vehicle texture model is used for indicating the contour of the vehicle model in the first road model;
and correcting the vehicle model in the first road model according to the vehicle texture model corresponding to the first road model.
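The vehicle texture model of claim 6 is built from the gradient of the road model's gray values: vehicle contours produce strong gradients against the smooth road surface. A minimal sketch using numpy's central-difference gradient (the threshold value and function name are assumptions; a production system might use Sobel filtering instead):

```python
import numpy as np

def vehicle_texture_mask(gray, grad_threshold=30.0):
    """Mark pixels whose gray-value gradient magnitude exceeds a threshold;
    the resulting high-gradient mask outlines vehicle contours."""
    gy, gx = np.gradient(gray.astype(float))   # per-axis central differences
    return np.hypot(gx, gy) > grad_threshold

# A vertical brightness step (road | vehicle) is picked up at the edge.
gray = np.zeros((5, 5))
gray[:, 3:] = 255.0
print(vehicle_texture_mask(gray))
```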
7. The method of claim 6, wherein the foreground extracting of each frame of image to identify a vehicle model in each of the road models further comprises:
acquiring a shadow area of the first road model according to the average brightness of the first road model;
correcting a shadow area of the first road model according to a vehicle texture model corresponding to the first road model;
and correcting the vehicle model in the first road model according to the corrected shadow area of the first road model.
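Claim 7 derives a shadow area from the road model's average brightness and corrects it against the vehicle texture model. One plausible reading is: pixels well below the average brightness are shadow candidates, and candidates that lie on vehicle contours are removed so dark vehicles are not mistaken for shadow. The `dark_factor` and interface below are assumptions:

```python
import numpy as np

def shadow_mask(road_model, texture_mask=None, dark_factor=0.6):
    """Flag pixels darker than dark_factor * average brightness as shadow
    candidates; pixels on vehicle texture are excluded from the mask."""
    mask = road_model < dark_factor * road_model.mean()
    if texture_mask is not None:
        mask = mask & ~texture_mask
    return mask

patch = np.array([[200.0, 200.0],
                  [200.0, 20.0]])          # one dark (shadow-like) pixel
print(shadow_mask(patch))
```

The corrected shadow mask would then be subtracted from the foreground before the vehicle model is finalized.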
8. The method of claim 5, wherein comparing each pixel point in each of the road models of the first image with a background sample set by the ViBe algorithm to determine a foreground pixel point in each of the road models of the first image, further comprises:
increasing the quantity threshold value when the first pixel point belongs to the vehicle texture model of the first road model;
and when the first pixel point does not belong to the vehicle texture model of the first road model, reducing the quantity threshold value.
9. The method according to claim 1, wherein the acquiring a vehicle motion model corresponding to the first road model according to the optical flow change of the first road model of the multiple frames of images comprises:
determining a motion direction angle and a motion speed of each pixel point according to the optical flow change of each pixel point in the first road model of the multi-frame image;
and obtaining a vehicle motion model corresponding to the first road model according to the motion direction angle and the motion speed of each pixel point.
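Claim 9's per-pixel motion direction angle and motion speed follow directly from the optical-flow displacement field between consecutive frames. A numpy sketch (the function name is an assumption; the flow field itself would come from a dense optical-flow estimator such as Farneback's method):

```python
import numpy as np

def motion_field(flow_dx, flow_dy):
    """Per-pixel motion direction angle (radians) and speed (pixels per
    frame) from optical-flow displacement components between frames."""
    angle = np.arctan2(flow_dy, flow_dx)   # direction of motion
    speed = np.hypot(flow_dx, flow_dy)     # displacement magnitude
    return angle, speed

# A pixel displaced by (3, 4) pixels moves at speed 5 per frame.
a, s = motion_field(np.array([[3.0]]), np.array([[4.0]]))
print(a[0, 0], s[0, 0])
```

Averaging `angle` and `speed` over the model, as in claim 10, would give the road structure model used to correct spurious flow vectors.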
10. The method according to claim 9, wherein the obtaining a vehicle motion model corresponding to a first road model of the multiple frames of images according to optical flow variation of the first road model further comprises:
determining a road structure model according to the average value of the motion direction angle and the motion speed of each pixel point, wherein the road structure model is used for indicating the motion characteristics of the vehicle model included in the road model;
and correcting the vehicle motion model according to the road structure model.
11. A vehicle queue length detection apparatus, characterized in that the apparatus comprises:
the road model acquisition module is used for acquiring a road model of each frame of image according to a plurality of frames of images acquired by the camera equipment for the target intersection, wherein the number of the road models of each frame of image is one or more, and the one or more road models respectively correspond to the image information of one or more lanes contained in each frame of image;
the vehicle model identification module is used for carrying out foreground extraction on each frame of image so as to identify a vehicle model in each road model;
a vehicle motion model obtaining module, configured to obtain a vehicle motion model corresponding to a first road model of the multiple frames of images according to an optical flow change of the first road model, where the vehicle motion model is used to describe a motion trajectory of the vehicle model in a road model, the first road model corresponds to a first lane in the target intersection, and the first lane is any one lane in the target intersection;
the length obtaining module is used for obtaining the vehicle queue length of the first lane corresponding to the first road model according to a vehicle model in the first road model and a vehicle motion model of the first road model;
the length acquisition module includes:
the parking area determining submodule is used for determining that an area where the vehicle model in the first road model is located is a parking area when the vehicle model in the first road model and the vehicle motion model of the first road model are in the same area and the speed in the vehicle motion model of the first road model is lower than a preset threshold value;
and the length obtaining submodule is used for converting the length of the parking area in the first road model into the actual length of the parking area on the first lane according to a preset calibration matrix to be used as the vehicle queuing length.
12. The apparatus of claim 11, wherein the road model obtaining module comprises:
the perspective transformation submodule is used for respectively carrying out perspective transformation on the multiple frames of images according to a preset projection matrix so as to obtain a road model of each frame of image;
the normalization submodule is used for performing normalization processing on the road model of each frame of image, and the normalization processing comprises the following steps: scale normalization and/or brightness normalization.
13. The apparatus of claim 12, wherein the scale normalization comprises:
normalizing the size of each road model according to the ratio of the size of the lane corresponding to each road model to the size of the vehicle in the corresponding lane;
the brightness normalization includes: and carrying out normalization processing on the brightness of each road model according to the average brightness of each road model and the brightness of each pixel.
14. The apparatus of claim 11, wherein the vehicle model identification module comprises:
the foreground extraction submodule is used for comparing each pixel point in each road model of a first image with a background sample set through a visual background extraction ViBe algorithm, and determining a foreground pixel point in each road model of the first image, wherein the first image is any one frame image in the multi-frame images;
and the identification submodule is used for determining a vehicle model in each road model of the first image according to all the foreground pixel points in each road model of the first image.
15. The apparatus of claim 14, wherein the foreground extraction sub-module is configured to:
acquiring the number of pixels in an intersection of all pixels in the neighborhood of a preset radius of a first pixel in the first road model and a first background sample set, wherein the first pixel is any one pixel in the first road model, and the first background sample set is the background sample set of the first pixel;
when the number of the pixels is larger than a preset number threshold, determining that the first pixel point is a background pixel point;
after all background pixel points in the first road model are obtained, determining pixel points in the first road model except all background pixel points as the foreground pixel points.
16. The apparatus of claim 11, wherein the vehicle model identification module further comprises:
the texture model obtaining sub-module is used for obtaining a vehicle texture model corresponding to the first road model according to the gradient of the gray value of the first road model, and the vehicle texture model is used for indicating the contour of a vehicle model in the first road model;
and the first correction submodule is used for correcting the vehicle model in the first road model according to the vehicle texture model corresponding to the first road model.
17. The apparatus of claim 16, wherein the vehicle model identification module further comprises:
the shadow area acquisition sub-module is used for acquiring a shadow area of the first road model according to the average brightness of the first road model;
the second correction submodule is used for correcting the shadow area of the first road model according to the vehicle texture model corresponding to the first road model;
the second correction submodule is further configured to correct a vehicle model in the first road model according to the corrected shadow region of the first road model.
18. The apparatus of claim 15, wherein the foreground extraction sub-module is further configured to:
increasing the quantity threshold value when the first pixel point belongs to the vehicle texture model of the first road model;
and when the first pixel point does not belong to the vehicle texture model of the first road model, reducing the quantity threshold value.
19. The apparatus of claim 11, wherein the vehicle motion model acquisition module comprises:
the first obtaining submodule is used for determining the motion direction angle and the motion speed of each pixel point according to the optical flow change of each pixel point in the first road model of the multi-frame images;
and the second obtaining submodule is used for obtaining a vehicle motion model corresponding to the first road model according to the motion direction angle and the motion speed of each pixel point.
20. The apparatus of claim 19, wherein the vehicle motion model acquisition module further comprises:
the road structure model obtaining submodule is used for determining a road structure model according to the average value of the motion direction angle and the motion speed of each pixel point, and the road structure model is used for indicating the motion characteristics of the vehicle model included in the road model;
and the correction submodule is used for correcting the vehicle motion model according to the road structure model.
21. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 10.
22. An electronic device, comprising:
the computer-readable storage medium recited in claim 21; and
one or more processors to execute the program in the computer-readable storage medium.
CN201810274230.5A 2018-03-29 2018-03-29 Vehicle queuing length detection method and device, storage medium and electronic equipment Active CN108550258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810274230.5A CN108550258B (en) 2018-03-29 2018-03-29 Vehicle queuing length detection method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN108550258A (en) 2018-09-18
CN108550258B (en) 2021-01-08

Family

ID=63517499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810274230.5A Active CN108550258B (en) 2018-03-29 2018-03-29 Vehicle queuing length detection method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN108550258B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111590A (en) * 2019-06-04 2019-08-09 南京慧尔视智能科技有限公司 A kind of vehicle dynamic queue length detection method
CN110349415B (en) * 2019-06-26 2021-08-20 江西理工大学 Driving speed measuring method based on multi-scale transformation
CN113129457B (en) * 2019-12-30 2024-02-06 百度在线网络技术(北京)有限公司 Texture generation method, device, equipment and medium
CN113361299B (en) * 2020-03-03 2023-08-15 浙江宇视科技有限公司 Abnormal parking detection method and device, storage medium and electronic equipment
CN111627241B (en) * 2020-05-27 2024-04-09 阿波罗智联(北京)科技有限公司 Method and device for generating intersection vehicle queuing information
WO2022143802A1 (en) * 2020-12-31 2022-07-07 奥动新能源汽车科技有限公司 Identification method and system for number of queuing vehicles in battery swapping station, and device and medium
CN114694084A (en) * 2020-12-31 2022-07-01 奥动新能源汽车科技有限公司 Method, system, equipment and medium for identifying queuing number of power change stations
CN114023062B (en) * 2021-10-27 2022-08-19 河海大学 Traffic flow information monitoring method based on deep learning and edge calculation

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100331167B1 (en) * 2000-03-28 2002-04-01 장태수 Method of Measuring Traffic Queue Length Based on Wavelet Transforms at Urban Intersection
CN101699512B (en) * 2009-10-30 2011-09-21 无锡景象数字技术有限公司 Depth generating method based on background difference sectional drawing and sparse optical flow method
CN101710448B (en) * 2009-12-29 2011-08-10 浙江工业大学 Road traffic state detecting device based on omnibearing computer vision
CN103871079B (en) * 2014-03-18 2016-11-09 南京金智视讯技术有限公司 Wireless vehicle tracking based on machine learning and light stream
CN103985250B (en) * 2014-04-04 2016-05-18 浙江工业大学 The holographic road traffic state vision inspection apparatus of lightweight
CN104159088B (en) * 2014-08-23 2017-12-08 中科院成都信息技术股份有限公司 A kind of long-distance intelligent vehicle monitoring system and method
CN105809956B (en) * 2014-12-31 2019-07-12 大唐电信科技股份有限公司 The method and apparatus for obtaining vehicle queue length
CN107274673B (en) * 2017-08-15 2021-01-19 苏州科技大学 Vehicle queuing length measuring method and system based on corrected local variance

Similar Documents

Publication Publication Date Title
CN108550258B (en) Vehicle queuing length detection method and device, storage medium and electronic equipment
EP3806064B1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
US10037604B2 (en) Multi-cue object detection and analysis
CN108986465B (en) Method, system and terminal equipment for detecting traffic flow
CN108734105B (en) Lane line detection method, lane line detection device, storage medium, and electronic apparatus
CN108877269B (en) Intersection vehicle state detection and V2X broadcasting method
AU2018286592A1 (en) Systems and methods for correcting a high-definition map based on detection of obstructing objects
US11164031B2 (en) System and method for analyzing an image of a vehicle
CN113468967A (en) Lane line detection method, device, equipment and medium based on attention mechanism
CN110472599B (en) Object quantity determination method and device, storage medium and electronic equipment
KR20180046798A (en) Method and apparatus for real time traffic information provision
Hinz Detection and counting of cars in aerial images
US20220044558A1 (en) Method and device for generating a digital representation of traffic on a road
CN115004273A (en) Digital reconstruction method, device and system for traffic road
CN112906471A (en) Traffic signal lamp identification method and device
CN114898243A (en) Traffic scene analysis method and device based on video stream
CN109903308B (en) Method and device for acquiring information
CN110738867B (en) Parking space detection method, device, equipment and storage medium
CN113160272A (en) Target tracking method and device, electronic equipment and storage medium
CN110969864A (en) Vehicle speed detection method, vehicle driving event detection method and electronic equipment
CN115761668A (en) Camera stain recognition method and device, vehicle and storage medium
CN104282157A (en) Main line video traffic detecting method for traffic signal control
CN114241792B (en) Traffic flow detection method and system
Muniruzzaman et al. Deterministic algorithm for traffic detection in free-flow and congestion using video sensor
CN113989774B (en) Traffic light detection method, device, vehicle and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant