CN115984321A - Speed measuring method, device, equipment and storage medium - Google Patents
- Publication number
- CN115984321A CN115984321A CN202211456784.XA CN202211456784A CN115984321A CN 115984321 A CN115984321 A CN 115984321A CN 202211456784 A CN202211456784 A CN 202211456784A CN 115984321 A CN115984321 A CN 115984321A
- Authority
- CN
- China
- Prior art keywords
- target
- vehicle
- preset
- image
- static image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Image Analysis (AREA)
Abstract
The present disclosure provides a speed measurement method, apparatus, device, and storage medium, relates to the technical field of traffic control, and can improve the adaptability of speed measurement. The method comprises the following steps: acquiring a video shot by an acquisition device on a target road, wherein the video comprises multiple frames of static images; in a case where the multiple frames of static images comprise a target static image, judging whether the shooting visual angle of the target static image is a preset visual angle, wherein a vehicle to be detected exists in the target static image; if it is judged that the shooting visual angle of the acquisition device is not the preset visual angle, updating a preset speed measurement parameter based on the target static image; and acquiring the movement speed of the vehicle to be detected based on the video and the preset speed measurement parameter.
Description
Technical Field
The present disclosure relates to the field of traffic control technologies, and in particular, to a speed measurement method, apparatus, device, and storage medium.
Background
In the construction of modern intelligent traffic systems and in road traffic monitoring and management, vehicle speed measurement is a basic and widely used capability that plays a vital role in overspeed monitoring, traffic accident liability determination, and road flow analysis. With the development of artificial intelligence and computer vision technologies, video-based vehicle speed measurement methods, which are lower in cost and easier to deploy, have gradually emerged.
At present, when vehicle speed measurement is performed based on video, algorithm parameters need to be determined in advance. For example, the parameters of a mapping model relating image coordinates in the video to the world coordinate system are determined by acquiring in advance information such as the focal length, pitch angle, and resolution of the camera; during actual use, the driving speed of the vehicle can then be estimated simply by using the mapping model to transform the two-dimensional coordinates of the vehicle into the world coordinate system. Once the parameters are determined, however, the camera cannot move, rotate, or zoom; that is, the shooting visual angle must remain fixed, otherwise the predetermined parameters become invalid. Dome (PTZ) cameras are increasingly widely used in traffic monitoring scenarios because of their greater flexibility, and a traffic manager may, as needed, change the camera's visual angle, focal length, or zoom from the back end, invalidating the predetermined parameters so that accurate vehicle speed measurement can no longer be guaranteed.
Disclosure of Invention
To solve the problem in the prior art that a change in the camera's shooting visual angle invalidates the speed measurement parameters of the speed measurement algorithm, so that accurate vehicle speed measurement cannot be guaranteed, the present disclosure provides a speed measurement method, apparatus, device, and storage medium that can automatically recognize a switch of the camera's shooting visual angle and automatically update the algorithm parameters for the new scene, improving the adaptability of the speed measurement method.
To achieve the above object, the following technical solutions are adopted:
in a first aspect, a method for measuring a speed is provided, and the method includes:
acquiring a video shot by an acquisition device on a target road, wherein the video comprises multiple frames of static images;
in a case where the multiple frames of static images comprise a target static image, judging whether the shooting visual angle of the target static image is a preset visual angle, wherein a vehicle to be detected exists in the target static image;
if the shooting visual angle of the acquisition device is not the preset visual angle, updating a preset speed measurement parameter based on the target static image;
and obtaining the movement speed of the vehicle to be detected based on the video and the preset speed measurement parameter.
With reference to the first aspect, in a possible implementation manner, the determining whether a shooting angle of the target still image is a preset angle includes:
acquiring a standard image shot by the acquisition equipment on the target road based on a preset visual angle, and acquiring standard characteristic points and a standard descriptor of the standard image;
extracting image features of the target static image to obtain target feature points and a target descriptor of the target static image;
performing feature matching on the standard feature points and the target feature points based on the target descriptor and the standard descriptor;
when matched feature points exist between the target feature points and the standard feature points, calculating the position offset between each pair of matched feature points, wherein the matched feature points comprise target feature points and standard feature points whose descriptor distance is smaller than a preset distance;
and when no matched feature points exist between the target feature points and the standard feature points, or when each offset is greater than a preset offset, determining that the shooting visual angle of the target static image is not the preset visual angle.
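As an illustration only, the view-change decision described above can be sketched in Python. Everything here is an assumption made for illustration — the function name, the pixel-offset measure, and the input format; a real system would obtain the matched pairs from a feature detector and descriptor matcher (such as ORB or SIFT), which the disclosure does not name.

```python
import math

def view_changed(matched_pairs, max_offset):
    """Decide whether the shooting visual angle has changed.

    matched_pairs: list of ((x1, y1), (x2, y2)) tuples pairing a standard
    feature point with its matched target feature point. The view is deemed
    changed when there are no matches at all, or when every matched pair is
    displaced by more than max_offset pixels.
    """
    if not matched_pairs:        # no matched feature points: scene replaced
        return True
    offsets = [math.hypot(tx - sx, ty - sy)
               for (sx, sy), (tx, ty) in matched_pairs]
    # every matched point moved too far: same scene content, new viewpoint
    return all(off > max_offset for off in offsets)
```

For instance, an empty match list is treated as a changed view, while a single pair offset by one pixel is not.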
With reference to the foregoing first aspect, in a possible implementation manner, the method further includes:
and when no matched feature points exist between the target feature points and the standard feature points, or when each offset is greater than the preset offset, taking the target static image as a new standard image.
With reference to the foregoing first aspect, in a possible implementation manner, the method further includes:
when matched feature points exist between the target feature points and the standard feature points, acquiring the time difference between the update time of the standard image and the target time corresponding to the target static image;
and when the time difference is greater than a preset time difference, taking the target static image as a new standard image.
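The two standard-image refresh rules above (replace the standard image when the view no longer matches, or when it has aged past a limit even though the view still matches) can be combined into one small decision function. This is a sketch only; the function name and the timestamp convention (seconds) are illustrative assumptions:

```python
def refresh_standard_image(has_match, last_update_ts, target_ts, max_age):
    """Return True when the target static image should replace the
    standard image.

    has_match:      whether matched feature points were found
    last_update_ts: update time of the current standard image (seconds)
    target_ts:      target time of the target static image (seconds)
    max_age:        preset time difference (seconds)
    """
    if not has_match:                       # view changed: refresh immediately
        return True
    # view unchanged, but refresh anyway once the standard image is stale
    return (target_ts - last_update_ts) > max_age
```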
With reference to the first aspect, in a possible implementation manner, updating the preset speed measurement parameter based on the target static image includes:
acquiring a first vanishing point based on a lane line segmentation map of the target static image, wherein the first vanishing point is the intersection point of two first target fitting straight lines in the lane line segmentation map; a first target fitting straight line is a straight line passing through a number of first target pixel points greater than or equal to a first preset threshold; and a first target pixel point is a pixel point corresponding to a lane line in the lane line segmentation map;
determining the motion track and vehicle type of the vehicle to be detected and an edge image of the vehicle to be detected based on a vehicle detection frame of the vehicle to be detected in the target static image and the video, wherein the edge image is obtained from the region of interest corresponding to the vehicle detection frame;
acquiring a second vanishing point based on the edge image, wherein the second vanishing point is the intersection point of two second target fitting straight lines in the edge image; a second target fitting straight line is a straight line in the target straight line system that passes through a number of second target pixel points greater than or equal to a second preset threshold and forms an included angle with the lane line greater than a preset included angle; the target straight line system is obtained by fitting based on the edge image; and a second target pixel point is a pixel point representing an edge of the vehicle to be detected in the edge image;
obtaining a scaling coefficient based on the motion track and vehicle type of the vehicle, wherein the scaling coefficient is used to represent the scaling ratio between the coordinates of the target static image and world coordinates;
and determining the first vanishing point, the second vanishing point, and the scaling coefficient as new preset speed measurement parameters.
With reference to the first aspect, in a possible implementation manner, the target static image comprises a plurality of vehicles to be detected, each vehicle to be detected corresponding to one target straight line system, and determining the second vanishing point based on the edge image comprises:
determining two second candidate fitting straight lines corresponding to each target straight line system, wherein a second candidate fitting straight line is a straight line in the target straight line system that passes through a number of second target pixel points greater than or equal to the second preset threshold and forms an included angle with the lane line greater than the preset included angle; the target straight line system is obtained by fitting based on the edge image of the corresponding vehicle to be detected; and a second target pixel point is a pixel point representing an edge of the vehicle to be detected in the edge image;
calculating the total number of second target pixel points in the edge image through which the two second candidate fitting straight lines corresponding to each target straight line system pass;
calculating the product of this total number and the vehicle type weight of the vehicle to be detected corresponding to the target straight line system to obtain the score of the target straight line system, wherein the vehicle type weight is used to represent the edge definition of the vehicle type of the vehicle to be detected;
and determining the two second candidate fitting straight lines corresponding to the target straight line system with the highest score as the two second target fitting straight lines, and determining the intersection point of these two straight lines as the second vanishing point.
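The scoring scheme above (total covered edge pixels multiplied by a vehicle-type weight, highest score wins) can be sketched as follows. The dictionary keys and function name are illustrative assumptions, and the candidate line fitting itself is omitted:

```python
def pick_second_vanishing_point(line_systems):
    """Pick the second vanishing point among candidate target straight
    line systems, one per vehicle to be detected.

    line_systems: list of dicts with (assumed) keys
      'inliers'         - total second target pixel points covered by the
                          two second candidate fitting straight lines,
      'weight'          - vehicle type weight (higher = sharper edges),
      'vanishing_point' - intersection of the two candidate lines.
    The score of a system is inliers * weight; the vanishing point of the
    highest-scoring system is returned.
    """
    best = max(line_systems, key=lambda s: s['inliers'] * s['weight'])
    return best['vanishing_point']
```

A truck with crisp edges (high weight) can thus win over a car whose lines cover more pixels.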
With reference to the first aspect, in a possible implementation manner, obtaining the movement speed of the vehicle to be detected based on the video and the preset speed measurement parameters comprises:
performing target tracking on the vehicle to be detected based on the video to determine a plurality of positions of the vehicle to be detected in the multiple frames of static images of the video;
mapping the plurality of positions to the world coordinate system based on the preset speed measurement parameters to obtain the displacement between any two positions;
and acquiring the time difference between the any two positions, and calculating the movement speed of the vehicle to be detected based on the time difference and the displacement.
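The final step can be sketched as a minimal computation over positions that have already been mapped into the world coordinate system; the function name and the metre/second units are illustrative assumptions, and the tracking and coordinate mapping themselves are not shown:

```python
import math

def movement_speed(world_positions, timestamps):
    """Estimate speed from tracked positions already mapped to world
    coordinates (metres) and their frame timestamps (seconds).

    Uses the first and last observation:
        speed = displacement / elapsed time.
    """
    (x0, y0), (x1, y1) = world_positions[0], world_positions[-1]
    displacement = math.hypot(x1 - x0, y1 - y0)   # metres
    elapsed = timestamps[-1] - timestamps[0]      # seconds
    return displacement / elapsed                 # metres per second
```

For example, a vehicle observed at (0, 0) and, three seconds later, at (30, 0) metres yields 10 m/s (36 km/h).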
In a second aspect, a speed measuring device is provided, the device comprising:
the video image acquisition module is used for acquiring a video shot by the acquisition equipment on a target road, wherein the video comprises a plurality of frames of static images;
the visual angle judging module is used for judging whether the shooting visual angle of the target static image is a preset visual angle or not under the condition that the multi-frame static images comprise the target static image; a vehicle to be detected exists in the target static image;
the preset speed measurement parameter updating module is used for updating the preset speed measurement parameter based on the target static image if the shooting visual angle of the acquisition equipment is judged not to be the preset visual angle;
and the movement speed determining module is used for acquiring the movement speed of the vehicle to be detected based on the video and the preset speed measurement parameters.
With reference to the second aspect, in a possible implementation manner, the viewing angle determining module includes:
the standard characteristic acquisition unit is used for acquiring a standard image shot by the acquisition equipment on the target road based on a preset visual angle, and acquiring standard characteristic points and a standard descriptor of the standard image;
the target characteristic acquisition unit is used for extracting image characteristics of the target static image to obtain target characteristic points and a target descriptor of the target static image;
the feature matching unit is used for performing feature matching on the standard feature points and the target feature points based on the target descriptor and the standard descriptor;
the position offset calculation unit is configured to calculate the position offset between each pair of matched feature points when matched feature points exist between the target feature points and the standard feature points, wherein the matched feature points comprise target feature points and standard feature points whose descriptor distance is smaller than a preset distance;
and the visual angle judging unit is used for determining that the shooting visual angle of the target static image is not the preset visual angle when no matched feature points exist between the target feature points and the standard feature points, or when each offset is greater than the preset offset.
With reference to the second aspect, in a possible implementation manner, the apparatus further includes:
and the standard image updating module is used for taking the target static image as a new standard image when no matched feature points exist between the target feature points and the standard feature points, or when each offset is greater than the preset offset.
With reference to the second aspect, in a possible implementation manner, the apparatus further includes:
the time difference determining module is used for acquiring the time difference between the update time of the standard image and the target time corresponding to the target static image when matched feature points exist between the target feature points and the standard feature points;
and the standard image updating module is also used for taking the target static image as a new standard image when the time difference is greater than the preset time difference.
With reference to the second aspect, in a possible implementation manner, the preset speed measurement parameter updating module includes:
a first vanishing point determining unit, configured to acquire a first vanishing point based on a lane line segmentation map of the target static image, wherein the first vanishing point is the intersection point of two first target fitting straight lines in the lane line segmentation map; a first target fitting straight line is a straight line passing through a number of first target pixel points greater than or equal to a first preset threshold; and a first target pixel point is a pixel point corresponding to a lane line in the lane line segmentation map;
the vehicle detection unit is used for determining the motion track and vehicle type of the vehicle to be detected and the edge image of the vehicle to be detected based on the vehicle detection frame of the vehicle to be detected in the target static image and the video, wherein the edge image is obtained from the region of interest corresponding to the vehicle detection frame;
a second vanishing point determining unit, configured to acquire a second vanishing point based on the edge image, wherein the second vanishing point is the intersection point of two second target fitting straight lines in the edge image; a second target fitting straight line is a straight line in the target straight line system that passes through a number of second target pixel points greater than or equal to a second preset threshold and forms an included angle with the lane line greater than a preset included angle; the target straight line system is obtained by fitting based on the edge image; and a second target pixel point is a pixel point representing an edge of the vehicle to be detected in the edge image;
the scaling coefficient determining unit is used for obtaining a scaling coefficient based on the motion track and vehicle type of the vehicle, wherein the scaling coefficient is used to represent the scaling ratio between the coordinates of the target static image and world coordinates;
and the preset speed measurement parameter updating unit is used for determining the first vanishing point, the second vanishing point, and the scaling coefficient as new preset speed measurement parameters.
With reference to the second aspect, in a possible implementation manner, the target static image comprises a plurality of vehicles to be detected, each vehicle to be detected corresponding to one target straight line system, and the second vanishing point determining unit comprises:
the second candidate fitting straight line determining subunit, used for determining two second candidate fitting straight lines corresponding to each target straight line system, wherein a second candidate fitting straight line is a straight line in the target straight line system that passes through a number of second target pixel points greater than or equal to the second preset threshold and forms an included angle with the lane line greater than the preset included angle; the target straight line system is obtained by fitting based on the edge image of the corresponding vehicle to be detected; and a second target pixel point is a pixel point representing an edge of the vehicle to be detected in the edge image;
the pixel point calculation subunit, used for calculating the total number of second target pixel points in the edge image through which the two second candidate fitting straight lines corresponding to each target straight line system pass;
the score calculating subunit, used for calculating the product of this total number and the vehicle type weight of the vehicle to be detected corresponding to the target straight line system as the score of the target straight line system, wherein the vehicle type weight is used to represent the edge definition of the vehicle type of the vehicle to be detected;
and the second vanishing point determining subunit, used for determining the two second candidate fitting straight lines corresponding to the target straight line system with the highest score as the two second target fitting straight lines, and determining the intersection point of these two straight lines as the second vanishing point.
With reference to the second aspect, in one possible implementation manner, the movement speed determination module includes:
the target tracking unit is used for carrying out target tracking on the vehicle to be detected based on the video so as to determine a plurality of positions of the vehicle to be detected in the multi-frame static image of the video;
the displacement determining unit is used for respectively mapping the plurality of positions to a world coordinate system based on preset speed measurement parameters so as to obtain the displacement between any two positions;
and the movement speed calculation unit is used for acquiring the time difference between any two positions and calculating the movement speed of the vehicle to be measured based on the time difference and the displacement.
In a third aspect, a speed measuring device is provided, including: a processor and a memory; the memory is configured to store computer-executable instructions, and when the speed measuring device runs, the processor executes the computer-executable instructions stored by the memory, so that the speed measuring device executes the speed measuring method described in the first aspect and any one of the possible implementation manners of the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having instructions stored therein, which when executed by a processor of a velocity measurement device, enable the velocity measurement device to perform a velocity measurement method as described in the first aspect and any one of the possible implementations of the first aspect.
In the present disclosure, the names of the speed measuring devices do not limit the devices or functional modules themselves; in practical implementation, these devices or functional modules may be referred to by other names. As long as the functions of the respective devices or functional modules are similar to those of the present disclosure, they fall within the scope of the claims of the present disclosure and their equivalents.
These and other aspects of the disclosure will be more readily apparent from the following description.
The technical solution provided by the present disclosure brings at least the following beneficial effects: a video shot by an acquisition device on a target road is acquired, wherein the video comprises multiple frames of static images; in a case where the multiple frames of static images comprise a target static image, it is judged whether the shooting visual angle of the target static image is a preset visual angle, wherein a vehicle to be detected exists in the target static image; if the shooting visual angle of the acquisition device is not the preset visual angle, a preset speed measurement parameter is updated based on the target static image; and the movement speed of the vehicle to be detected is obtained based on the video and the preset speed measurement parameter. Because the preset speed measurement parameter is updated after judging whether the shooting visual angle of the target static image is the preset visual angle, the vehicle speed of the vehicle to be detected can still be accurately determined when a switch of the shooting angle of the acquisition device is detected, giving the speed measurement method adaptability to visual angle switching of the acquisition device.
Drawings
Fig. 1 is a schematic flow chart of a speed measurement method provided by the present disclosure;
fig. 2 is another schematic flow chart of a speed measuring method provided in the present disclosure;
fig. 3 is another schematic flow chart of a speed measurement method provided by the present disclosure;
fig. 4 is another schematic flow chart of a speed measurement method provided by the present disclosure;
fig. 5 is a schematic structural diagram of a speed measuring device provided in the present disclosure;
fig. 6 is a schematic diagram of a hardware structure of a speed measuring device provided in the present disclosure.
Detailed Description
The following describes in detail the speed measurement method, apparatus, device, and storage medium provided by the embodiments of the present disclosure with reference to the accompanying drawings.
The terms "first" and "second" and the like in the specification and drawings of the present disclosure are used for distinguishing different objects or for distinguishing different processes for the same object, and are not used for describing a specific order of the objects.
Furthermore, the terms "including" and "having," and any variations thereof, as referred to in the description of the present disclosure, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It is noted that in the embodiments of the present disclosure, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described as "exemplary" or "such as" in the embodiments of the present disclosure is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present relevant concepts in a concrete fashion.
Hereinafter, terms related to the present application will be explained.
1. Characteristic point
In image processing, a feature point is a point where the image grayscale value changes drastically, or a point of large curvature on an image edge (that is, the intersection of two edges). Feature points play a very important role in feature-based image matching algorithms: because they reflect essential characteristics of the image and can identify the target object in it, image matching can be accomplished by matching feature points.
2. Descriptor
A descriptor is a compact representation of the image around a detected feature point; it describes the feature point and retains only the most important local information of the image.
3. Lane line
Lane lines are traffic markings used to separate traffic flows; they are typically white or yellow, and solid or dashed. A white dashed line separates vehicles traveling in the same direction; lane changing and overtaking are allowed when safe. A white solid line also separates vehicles traveling in the same direction, but lane changing is not allowed. A yellow solid line separates vehicles traveling in different directions (sometimes in the same direction) and can serve as a boundary line or a center line; lanes may not be changed across it. A yellow dashed line can likewise serve as a boundary line or a center line, and lanes may be changed across it.
4. Region of interest (region of interest, ROI)
In machine vision and image processing, a region to be processed, outlined within an image in the form of a rectangle, circle, ellipse, irregular polygon, or the like, is called a region of interest. The region of interest is the focus of image analysis; delineating it around the target reduces processing time and increases precision.
5. Vanishing point
Parallel straight lines in the world coordinate system converge to the same point in a perspective view; this point is the vanishing point of the perspective view.
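Numerically, a vanishing point can be found as the intersection of two fitted image lines. A minimal sketch, assuming each line is given in the general form a·x + b·y + c = 0 (a representation the disclosure does not prescribe):

```python
def line_intersection(l1, l2):
    """Intersection of two lines given as (a, b, c) with a*x + b*y + c = 0.

    Two image lines that are parallel in world coordinates intersect at
    the vanishing point. Returns None when the lines are (numerically)
    parallel in the image as well, i.e. the vanishing point is at infinity.
    """
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    # Cramer's rule on a1*x + b1*y = -c1 and a2*x + b2*y = -c2
    x = (b1 * c2 - b2 * c1) / det
    y = (c1 * a2 - c2 * a1) / det
    return (x, y)
```

For example, the lines y = x and y = -x + 2 intersect at (1, 1), while two vertical lines yield None.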
When vehicle speed measurement is performed based on video, it is only necessary to deploy a video acquisition device on the road to be monitored and measure the speed from the video it collects; the approach is therefore low in cost and easy to deploy. In video-based speed measurement, the video recorded by the camera is analyzed by digital image processing: a video coordinate system is established, and the vehicle speed is calculated from the time the vehicle takes to cover a certain distance. Specifically, a speed measurement zone is defined within the area shot by the camera, the actual distance corresponding to that zone is calculated from the video coordinate system and the world coordinate system, the time the vehicle takes to pass through the zone is then determined from the video, and finally the vehicle speed is determined from the actual distance and the time.
However, the above process requires a large amount of computation, and performing speed measurement in this way on a road with many vehicles is time-consuming. To improve the operating efficiency of the speed measurement algorithm, technicians proposed determining in advance the parameters of a mapping model relating image coordinates in the video to the world coordinate system, using information such as the focal length, pitch angle, and resolution of the camera; during actual use, the driving speed of a vehicle can then be estimated simply by using the mapping model to transform the vehicle's two-dimensional coordinates into the world coordinate system. Although this method improves speed measurement efficiency to some extent, once the parameters are determined the camera cannot move, rotate, or zoom; that is, the shooting visual angle must remain fixed, otherwise the predetermined parameters become invalid. For example, for a road on which a rotatable camera is deployed, once the camera rotates, the mapping between image coordinates in the video and the world coordinate system changes, the mapping model fails, and an accurate vehicle speed can no longer be obtained.
To solve this technical problem, the present application provides a speed measurement method that can improve the adaptability of speed measurement.
As shown in fig. 1, a flow chart of a speed measurement method provided in the embodiment of the present application is shown, where the method includes the following steps:
step S110: and acquiring a video shot by the acquisition equipment on the target road, wherein the video comprises a plurality of frames of static images.
The acquisition device may be a camera or another device with a shooting function, and may vary with the application scenario. For example, for vehicles traveling on a given road, the acquisition device may be a dome camera fixedly deployed near the target road, or a portable camera used by a traffic researcher for research or temporary traffic control work on that road. The video comprises multiple frames of static images, the number of which is related to the frame rate of the video, where the frame rate is the number of static images played per second.
The video shot by the acquisition device on the target road may be obtained in real time, to measure vehicle speed promptly, or it may be obtained from a storage device, to measure the speed of a vehicle shot at some moment in the past.
Step S120: and under the condition that the multiple frames of static images comprise the target static image, judging whether the shooting visual angle of the target static image is a preset visual angle.
The target static image has a vehicle to be measured.
When the speed of the vehicle to be detected needs to be measured at a certain moment, the image corresponding to that moment is the target static image. For example, if a certain vehicle to be detected passed through the area shot by the acquisition device at 2:26 p.m., the target static image corresponding to 2:26 p.m. may be obtained from the video collected by the acquisition device. In practical applications, to detect the vehicle speed in real time, the target static image may be the static image corresponding to the current moment in the video collected by the device in real time.
In addition, whether the shooting angle of view of the target still image is the preset angle of view can be judged by comparing the target still image with a standard image shot by the acquisition device at the preset angle of view.
Step S130: and if the shooting visual angle of the acquisition equipment is not the preset visual angle, updating the preset speed measurement parameter based on the target static image.
If the shooting angle of view of the target still image is not the preset angle of view, the preset speed measurement parameters are no longer valid. In this case the preset speed measurement parameters can be updated based on the target still image, so that the movement speed of the vehicle can still be obtained accurately from the updated parameters.
The preset speed measurement parameters comprise a first vanishing point, a second vanishing point and a scaling coefficient. Parallel straight lines in the world coordinate system converge to the same point in a perspective view; that point is a vanishing point of the perspective view. The first vanishing point represents the intersection of two parallel lane lines in the lane line segmentation map, the second vanishing point represents the intersection of the two parallel straight lines on which the head and tail edges of the vehicle lie, and the scaling coefficient represents the ratio between the size of the vehicle to be measured in the image and its actual size.
Step S140: and obtaining the movement speed of the vehicle to be detected based on the video and the preset speed measurement parameter.
For example, the coordinate point of the vehicle to be measured in a target still image of the video and its coordinate point in the still image one frame earlier are obtained first. Using the preset speed measurement parameters, the two coordinate points are mapped from image coordinates to world coordinates, and the distance between the mapped points (that is, the distance traveled by the vehicle to be measured between the moment corresponding to the target still image and the moment corresponding to the previous still image) can be calculated. Dividing this traveled distance by the elapsed time yields the movement speed of the vehicle to be measured.
In a possible implementation manner, the speed measurement method may specifically determine whether the shooting angle of view of the target still image is a preset angle of view through the following steps S121 to S125, please refer to fig. 2:
step S121: and acquiring a standard image shot by the acquisition equipment on the target road based on a preset visual angle, and acquiring standard characteristic points and a standard descriptor of the standard image.
Step S122: and extracting image features of the target static image to obtain target feature points and a target descriptor of the target static image.
Step S123: performing feature matching on the standard feature points and the target feature points based on the target descriptor and the standard descriptor.
Step S124: when matched feature points exist between the target features and the preset features, calculating the position offset between each pair of matched feature points; the matched feature points comprise pairs of target feature points and standard feature points whose similarity satisfies a preset similarity threshold.
Step S125: when no matched feature points exist between the target features and the preset features, or each offset is larger than the preset offset, determining that the shooting angle of view of the target still image is not the preset angle of view.
With reference to the foregoing embodiments, in one possible implementation manner, the method further includes:
and when the matched feature points do not exist in the target feature and the preset feature or the offset is larger than the preset offset, taking the target static image as a new standard image.
With reference to the foregoing embodiments, in one possible implementation manner, the method further includes:
and when the target characteristic and the preset characteristic have matched characteristic points, acquiring the time length difference between the updating time of the standard image and the target time corresponding to the target static image.
And when the time difference is greater than the preset time difference, taking the target static image as a new standard image.
Specifically, an Oriented FAST and Rotated BRIEF (ORB) feature extraction algorithm may be run on the target still image acquired by the acquisition device: ORB feature points are extracted, a descriptor is built for each feature point, and the extracted feature points and descriptors of the current image are matched against the ORB feature points and descriptors of the standard image stored in the system. It is then judged whether the two images have matched ORB feature points and whether the positions of the matched feature points on the images show any obvious offset. If the images match without obvious offset, it is further judged whether a certain time has elapsed since the standard image was last updated; if so, the current image is taken as the new standard image and the judgment result "view angle not switched" is returned; otherwise the standard image is not updated and "view angle not switched" is returned directly. Regularly updating the standard image while the view angle is unchanged copes with natural scene changes, such as the sky color changing over time. Setting a minimum update interval avoids the standard image being refreshed so quickly that feature matching always succeeds, which would make a view-angle switch of the scene hard to recognize. The standard image may, for example, be updated every 1 minute, depending on the specific requirements of the implementation. If feature matching fails, or matching succeeds but with obvious offset, the current image is taken as the new standard image and the judgment result "view angle switched" is returned.
It should be understood that the feature extraction performed on the target still image may instead use the Scale-Invariant Feature Transform (SIFT), Harris corner detection, Speeded Up Robust Features (SURF), Features from Accelerated Segment Test (FAST), Binary Robust Independent Elementary Features (BRIEF), or the like.
Referring to fig. 3, in a possible implementation, updating the preset speed measurement parameters based on the target still image includes the following steps (step S131 to step S135):
step S131: acquiring a first vanishing point based on a lane line segmentation chart of the target static image; the first vanishing point is the intersection point of two first target fitting straight lines in the lane line segmentation graph; the first target fitting straight line is a straight line which passes through the first target pixel points and has the number larger than or equal to a first preset threshold value; the first target pixel point is a pixel point corresponding to the lane line in the lane line segmentation graph.
The target still image can be passed into a pre-trained lane line semantic segmentation model, which outputs a lane line segmentation map. The segmentation map is a binary image: first target pixel points, in lane line regions, have the value 1, and pixel points in non-lane-line regions have the value 0. A line system comprising multiple fitted straight lines is then fitted from the lane line segmentation map using the Hough transform. Two first target fitting straight lines are determined from the lines in this system; a first target fitting straight line is one for which the number of first target pixel points it passes through in the segmentation map, i.e. pixel points with value 1, is greater than a first preset threshold. It can be understood that the lane line semantic segmentation model may be the deep-learning-based LaneNet model, another deep-learning model such as Mask RCNN, DeepLab or SegNet, or a model based on conventional methods such as the histogram method.
Specifically, in practical applications, the two best-fitting first target fitting straight lines can be selected from the line system, namely the two lines that pass through the largest number of pixel points with value 1 in the lane line segmentation map. The intersection of these two lines is the first vanishing point vp1 of the image. Parallel straight lines in the world coordinate system converge to the same point in a perspective view, and that point is a vanishing point of the perspective view.
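The intersection step can be sketched as follows. In practice the two input lines would be the best-fitting Hough lines from the lane mask; the helper name and the example endpoints below are illustrative.

```python
# Sketch of step S131's final step: intersect the two best-fitting lane
# lines (each given as a pair of image points) to obtain vp1.
def intersect(l1, l2):
    """Intersection of two lines, each given as ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None                      # parallel in the image: no finite vanishing point
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

# Example: two lane lines converging toward the top of a 640x480 image.
vp1 = intersect(((100, 400), (250, 100)), ((500, 400), (350, 100)))
```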
Step S132: determining a motion track and a vehicle type of the vehicle to be detected and an edge image of the vehicle to be detected based on a vehicle detection frame and a video of the vehicle to be detected in the target static image; and obtaining the edge image according to the region of interest corresponding to the vehicle detection frame.
The vehicle type of the vehicle to be measured in the target still image, such as a car, an SUV, a truck or a bus, can be identified by calling a deep-learning-based vehicle detection algorithm, which can also output a vehicle detection frame indicating where the vehicle is located in the image. Specifically, the vehicle detection algorithm outputs a set D = {d_i | i = 1, 2, …, n}, where each d = [x1, y1, x2, y2, label, score] is a 6-dimensional vector: (x1, y1) and (x2, y2) are the coordinates of the upper-left and lower-right corners of the vehicle detection frame, label is the vehicle type, score is the confidence, and n is the number of detected vehicles. Detection results with low confidence can be filtered out by setting a confidence threshold.
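The confidence filtering can be sketched as follows, assuming detections are stored as the 6-vectors [x1, y1, x2, y2, label, score] described above; the 0.5 threshold is an illustrative assumption.

```python
# Sketch of the detection-output filtering: drop detections whose
# confidence score falls below a preset threshold.
def filter_detections(detections, score_threshold=0.5):
    """Keep only detections whose confidence score meets the threshold."""
    return [d for d in detections if d[5] >= score_threshold]

# Example: two confident detections and one low-confidence false positive.
dets = [
    [100, 200, 180, 260, "car",   0.92],
    [300, 210, 420, 300, "truck", 0.88],
    [ 50,  60,  70,  80, "car",   0.12],   # filtered out
]
kept = filter_detections(dets)
```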
The image ROI region corresponding to each vehicle detection frame is imported into an edge detector, which outputs an edge image for each imported ROI sub-image. The edge image is a binary image: edge points, i.e. the second target pixel points, have pixel value 1, and non-edge points have pixel value 0.
It is understood that the deep-learning-based vehicle detection algorithm may be any version of You Only Look Once (YOLO), the Fast Region-based Convolutional Neural Network (Fast RCNN), Faster RCNN, Mask RCNN, the Single Shot MultiBox Detector (SSD), etc., or a detection algorithm based on conventional methods, such as Histogram of Oriented Gradients features with a Support Vector Machine (HOG + SVM) or the Deformable Part Model (DPM).
Step S133: obtaining a second vanishing point based on the edge image; the second vanishing point is the intersection point of two second target fitting straight lines in the edge image; the second target fitting straight line is a straight line which has the number of second target pixel points passing through the target straight line system greater than or equal to a second preset threshold value and has an included angle with the lane line greater than a preset included angle; the target linear system is obtained based on edge image fitting; the second target pixel point is a pixel point which represents the edge of the vehicle to be detected in the edge image.
Specifically, a target straight line system can be fitted from the edge image by using hough transform, and the target straight line system comprises a plurality of fitting straight lines. The second target pixel point is a pixel point which represents the edge of the vehicle to be detected in the edge image, namely the edge point with the pixel value of 1. And selecting two straight lines, which have the number of second target pixel points greater than or equal to a second preset threshold value and have an included angle with the lane line greater than a preset included angle, from the target straight line system to serve as two second target fitting straight lines, and determining the intersection point of the two second target fitting straight lines to serve as a second vanishing point.
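The angle filter in this step can be sketched as follows. Lines are treated as point pairs; the 30-degree threshold and the function names are illustrative assumptions rather than values from the text.

```python
# Sketch of the angle filter in step S133: keep only edge lines whose
# included angle with the lane direction exceeds a preset angle, so that
# lines parallel to the lane never contribute to the second vanishing point.
import math

def angle_between(l1, l2):
    """Acute angle in degrees between two lines given as point pairs."""
    def direction(l):
        (x1, y1), (x2, y2) = l
        return math.atan2(y2 - y1, x2 - x1)
    d = abs(direction(l1) - direction(l2)) % math.pi
    return math.degrees(min(d, math.pi - d))

def filter_edge_lines(lines, lane_line, min_angle_deg=30.0):
    """Drop edge lines that are nearly parallel to the lane line."""
    return [l for l in lines if angle_between(l, lane_line) > min_angle_deg]
```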
Step S134: obtaining a scaling coefficient based on the motion track and the vehicle type of the vehicle; wherein the scaling factor is used to represent the scaling between the coordinates of the target still image and the world coordinates.
The length of the vehicle in the image can be represented by the line segment cut off between the two points where the vehicle's motion trajectory intersects its detection frame; the actual length of this segment in the world coordinate system, computed with the scaling factor initially set to α = 1, gives an estimated vehicle length for each vehicle. The factor α is then corrected according to the empirical vehicle length, which differs by vehicle type: for example, 4.8 meters for a car or SUV and 10 meters for a truck or bus. The corrected values of α over all vehicles are averaged to obtain the final scaling factor α.
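A minimal sketch of this correction, under the assumption that each per-vehicle correction is the ratio of the empirical length to the length estimated with α = 1 (the text states only that α is corrected by the empirical value and then averaged); function and constant names are illustrative. Empirical lengths follow the text: 4.8 m for cars/SUVs, 10 m for trucks/buses.

```python
# Sketch of step S134: average per-vehicle corrections of alpha, where each
# correction rescales the alpha=1 length estimate to the empirical length.
EMPIRICAL_LENGTH = {"car": 4.8, "suv": 4.8, "truck": 10.0, "bus": 10.0}

def scaling_factor(estimates):
    """estimates: list of (vehicle_type, length_estimated_with_alpha_1)."""
    corrections = [EMPIRICAL_LENGTH[t] / est for t, est in estimates if est > 0]
    return sum(corrections) / len(corrections)
```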
Step S135: and determining the first vanishing point, the second vanishing point and the scaling coefficient as new preset speed measurement parameters.
In one possible implementation manner, the target static image includes a plurality of vehicles to be tested; each vehicle to be tested corresponds to one target linear system; the step of determining the second vanishing point based on the edge image includes the following procedure.
Firstly, determining two second candidate fitting straight lines corresponding to each target straight line system; the second candidate fitting straight line is a straight line which has the number of second target pixel points passing through the target straight line system larger than or equal to a second preset threshold value and has an included angle with the lane line larger than a preset included angle; the target straight line system is obtained based on the edge image fitting of the corresponding vehicle to be detected; the second target pixel points are pixel points which represent the edge of the vehicle to be detected in the edge image.
Then, the total number of second target pixel points in the edge image passed through by the two second candidate fitting straight lines corresponding to each target line system is calculated.
And further, calculating the product of the total number and the vehicle type weight of the vehicle to be tested corresponding to the target linear system as the score of the target linear system, wherein the vehicle type weight of the vehicle to be tested is used for representing the edge definition of the vehicle type of the vehicle to be tested.
And finally, determining two second candidate fitting straight lines corresponding to the target straight line system with the highest score as two second target fitting straight lines, and determining the intersection point of the two second target fitting straight lines as a second vanishing point.
Specifically, after a target line system is fitted from the edge image using the Hough transform, straight lines in the system whose included angle with the fitted lane line is smaller than a certain threshold are filtered out. By the definition of the second vanishing point, once the first vanishing point is determined, another set of parallel lines in the world coordinate system, perpendicular to the set converging to the first vanishing point, also converges to a single point in the perspective view; this point is the second vanishing point corresponding to the first. Among the edge lines of the vehicle, some are perpendicular to the lane line and some are parallel to it, and those parallel to the lane line should not be used to solve for the second vanishing point.
Two second target fitting straight lines are selected from each target line system; a second target fitting straight line may be a line that passes through more than a preset threshold number of pixel points with value 1 in the edge image. The numbers of value-1 pixel points passed through by the two selected lines are added, and the sum is multiplied by the vehicle type weight of the vehicle corresponding to the edge image; the product is the score of that line system. Different vehicle types carry different weights: a vehicle type with clearer edges is given a higher weight, because clearer edges make the solved second vanishing point more accurate. Empirical values in one implementation of this scheme are: weight 2 for trucks, 1.5 for buses and 1 for the rest. For ease of calculation and understanding, the numbers in the following example are simplified and do not represent actual conditions. Suppose the target still image contains vehicle A, vehicle B and vehicle C. The two second target fitting straight lines obtained from the edge image corresponding to vehicle A pass through a total of 500 pixel points with value 1, and the weight of vehicle A is 5, so the score of vehicle A's line system is 2500. The two lines from vehicle B's edge image pass through 600 value-1 pixel points, and the weight of vehicle B is 4, so vehicle B's line system scores 2400. The two lines from vehicle C's edge image pass through 400 value-1 pixel points, and the weight of vehicle C is 8, so vehicle C's line system scores 3200. Comparing the scores of the line systems corresponding to vehicles A, B and C, the intersection of the two second target fitting straight lines in vehicle C's line system is determined to be the second vanishing point vp2. These steps ensure that the two selected straight lines come from the edge of the same vehicle, guaranteeing the accuracy of the second vanishing point.
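The scoring rule can be sketched as follows, using the empirical weights from the text (truck 2, bus 1.5, otherwise 1); the data layout and function names are illustrative assumptions.

```python
# Sketch of the per-vehicle scoring: score = (edge pixels covered by the two
# candidate lines) x (vehicle type weight); the highest-scoring system
# supplies the second vanishing point.
TYPE_WEIGHT = {"truck": 2.0, "bus": 1.5}   # all other types weigh 1.0

def best_line_system(systems):
    """systems: list of (vehicle_id, vehicle_type, total_edge_pixels)."""
    def score(s):
        _, vtype, pixels = s
        return pixels * TYPE_WEIGHT.get(vtype, 1.0)
    return max(systems, key=score)
```

For example, with a car covering 500 edge pixels (score 500), a bus covering 600 (score 900) and a truck covering 400 (score 800), the bus's line system would supply vp2.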
Referring to fig. 4, in an embodiment of the speed measuring method, the step of obtaining the moving speed of the vehicle to be measured based on the video and the preset speed measuring parameter includes the following steps (S141-S143):
step S141: and carrying out target tracking on the vehicle to be detected based on the video so as to determine a plurality of positions of the vehicle to be detected in the multi-frame static image of the video.
When performing target tracking on the vehicle to be measured based on the video, the ROI sub-image of each vehicle detection frame is passed into a convolutional neural network model to extract the appearance feature vector r_i of the vehicle to be measured, and the minimum cosine distance d^(1)(i, j) between r_i and the appearance feature vector r_j of each tracked target is calculated. The prediction function of a Kalman filter is executed to generate a predicted motion feature vector m_j for each tracked target, and the Mahalanobis distance d^(2)(i, j) between the measurement d_i and the prediction m_j is calculated. A cost matrix C is then generated, where C_ij = λ d^(1)(i, j) + (1 - λ) d^(2)(i, j) and λ is a weighting coefficient. The optimal bipartite matching of the cost matrix C is computed with the Hungarian algorithm to obtain the tracking result for each detection frame, and based on this result the current-frame measurement d_i is used to update the corresponding Kalman filter, yielding the positions of the vehicle across the multiple still images of the video. The positions of the vehicle to be measured in the current frame, the previous frame and the frame 50 frames earlier can then be obtained; each position may be represented by the coordinates of the bottom midpoint of the vehicle detection frame, since the displacement of this point best reflects the vehicle's actual displacement on the road.
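The association step can be sketched as follows. The combined cost C[i][j] = λ·d1[i][j] + (1 - λ)·d2[i][j] follows the text; for self-containment the optimal matching is found here by brute force over permutations (assuming equally many detections and tracks), whereas a real tracker would use the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`. The distance matrices are toy inputs, not CNN/Kalman outputs.

```python
# Sketch of detection-to-track association: weighted sum of appearance and
# motion distances, then a minimum-cost bipartite matching.
from itertools import permutations
import numpy as np

def associate(d_appearance, d_motion, lam=0.5):
    """Return (detection_idx, track_idx) pairs minimising the combined cost."""
    cost = lam * np.asarray(d_appearance) + (1.0 - lam) * np.asarray(d_motion)
    n = cost.shape[0]
    # Brute-force optimal assignment (Hungarian algorithm in practice).
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))
    return list(enumerate(best))
```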
Step S142: and respectively mapping the plurality of positions to a world coordinate system based on preset speed measurement parameters so as to obtain the displacement between any two positions.
Step S143: and acquiring the time difference between any two positions, and calculating the movement speed of the vehicle to be measured based on the time difference and the displacement.
Specifically, the positions of the vehicle to be measured in the current frame, the previous frame and the frame 50 frames earlier (the bottom midpoint coordinates of the vehicle detection frame) are mapped to the world coordinate system. Let p(x, y) be a point on the image, c(c_x, c_y) the image center point, vp1(u_x, u_y) and vp2(v_x, v_y) the first and second vanishing points of the image, and α the scaling factor. First, the focal length f = sqrt(-(vp1 - c) · (vp2 - c)) is determined, where · denotes the vector dot product. From the relationship between a vanishing point and the focal length, the world coordinates of the two vanishing points are VP1(u_x, u_y, f) and VP2(v_x, v_y, f), and from the property of the image center point its world coordinate is C(c_x, c_y, 0). From solid geometry, the vector perpendicular to the plane through VP1, VP2 and C is Q = (VP1 - C) × (VP2 - C), where × denotes the vector cross product. The obtained Q = (q_x, q_y, q_z) defines a third vanishing point, whose image coordinates vp3 = (w_x, w_y) can be computed from Q; its world coordinate is then VP3(w_x, w_y, f), and the unit normal vector pointing from the image center point is obtained by normalizing VP3 - C, where ||·|| denotes the vector length. Let P_f(x, y, f) be the coordinate of point p on the focal plane; finally, the world coordinates of p are obtained through an affine transformation (projection onto the road plane, scaled by α). Suppose the bottom midpoints of the vehicle detection frame in the current frame, the previous frame and the frame 50 frames earlier map to P_0, P_1 and P_2 in the world coordinate system; then the displacement over the last frame is s_1 = ||P_1 - P_0|| and the displacement over the last 50 frames is s_2 = ||P_2 - P_0||, where ||·|| denotes the vector length.
The elapsed times t_1 and t_2 over the last frame and the last 50 frames are obtained, and the instantaneous speed of the vehicle v_1 = s_1 / t_1 and its average speed over the last 50 frames v_2 = s_2 / t_2 are calculated.
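The calibration and speed arithmetic above can be sketched as follows, assuming the standard orthogonal-vanishing-point relation f = sqrt(-(vp1 - c) · (vp2 - c)) for the focal length. The full road-plane projection is omitted for brevity, and all numeric values are toy inputs.

```python
# Sketch of the focal-length estimate and the displacement/time speed
# computation from steps S142-S143.
import math

def focal_length(vp1, vp2, c):
    """Focal length from two vanishing points of orthogonal directions."""
    dot = ((vp1[0] - c[0]) * (vp2[0] - c[0]) +
           (vp1[1] - c[1]) * (vp2[1] - c[1]))
    # Requires the two vanishing points to lie on opposite sides of the
    # principal point, so the dot product is negative.
    return math.sqrt(-dot)

def speed(p_now, p_then, alpha, elapsed_s):
    """Average speed between two road-plane positions, in world units / s."""
    s = math.hypot(p_now[0] - p_then[0], p_now[1] - p_then[1]) * alpha
    return s / elapsed_s
```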
It can be seen that the technical solutions provided by the embodiments of the present disclosure are introduced above mainly from the perspective of methods. In order to implement the above functions, corresponding hardware structures and/or software modules for performing each function are included. Those skilled in the art will readily appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The speed measuring device in the embodiment of the present disclosure may be divided into function modules according to the method example, for example, each function module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. Optionally, the division of the modules in the embodiment of the present disclosure is illustrative, and is only a logic function division, and there may be another division manner in actual implementation.
As shown in fig. 5, a speed measuring device 500 provided for the embodiment of the present disclosure includes:
the video image acquisition module 510 is configured to acquire a video taken by an acquisition device on a target road, where the video includes multiple frames of static images;
a view angle determining module 520, configured to determine whether a shooting view angle of the target still image is a preset view angle when the multiple frames of still images include the target still image; a vehicle to be detected exists in the target static image;
a preset speed measurement parameter updating module 530, configured to update a preset speed measurement parameter based on the target static image if it is determined that the shooting angle of view of the acquisition device is not the preset angle of view;
and a motion speed determining module 540, configured to obtain a motion speed of the vehicle to be detected based on the video and a preset speed measurement parameter.
Optionally, in a possible implementation manner, the viewing angle determining module includes:
the standard characteristic acquisition unit is used for acquiring a standard image shot by the acquisition equipment on the target road based on a preset visual angle, and acquiring standard characteristic points and a standard descriptor of the standard image;
the target characteristic acquisition unit is used for extracting image characteristics of the target static image to obtain target characteristic points and a target descriptor of the target static image;
the feature matching unit is used for performing feature matching on the standard feature points and the target feature points based on the target descriptor and the standard descriptor;
a position offset amount calculation unit configured to calculate a position offset between each pair of matched feature points when matched feature points exist between the target features and the preset features; the matched feature points comprise pairs of target feature points and standard feature points whose similarity satisfies a preset similarity threshold;
and the visual angle judging unit is used for determining that the shooting visual angle of the target static image is not the preset visual angle when the matched characteristic points do not exist in the target characteristic and the preset characteristic or each offset is greater than the preset offset.
Optionally, in a possible implementation manner, the apparatus further includes:
and the standard image updating module is used for taking the target static image as a new standard image when the target characteristic and the preset characteristic have no matched characteristic points or the offset is greater than the preset offset.
Optionally, in a possible implementation manner, the apparatus further includes:
the time length difference determining module is used for acquiring the time length difference between the updating time of the standard image and the target time corresponding to the target static image when matched feature points exist in the target feature and the preset feature;
and the standard image updating module is also used for taking the target static image as a new standard image when the time difference is greater than the preset time difference.
Optionally, in a possible implementation manner, the preset speed measurement parameter updating module includes:
the first vanishing point determining unit is used for acquiring a first vanishing point based on a lane line segmentation graph of the target static image; the first vanishing point is the intersection point of two first target fitting straight lines in the lane line segmentation graph; the first target fitting straight line is a straight line which passes through the first target pixel points and has the number larger than or equal to a first preset threshold value; the first target pixel point is a pixel point corresponding to the lane line in the lane line segmentation graph;
the vehicle detection unit is used for determining the motion track and the vehicle type of the vehicle to be detected and the edge image of the vehicle to be detected based on the vehicle detection frame and the video of the vehicle to be detected in the target static image; the edge image is obtained according to the region of interest corresponding to the vehicle detection frame;
a second vanishing point determining unit, configured to obtain a second vanishing point based on the edge image; the second vanishing point is the intersection point of two second target fitting straight lines in the edge image; the second target fitting straight line is a straight line which has the number of second target pixel points passing through the target straight line system larger than or equal to a second preset threshold value and has an included angle with the lane line larger than a preset included angle; the target linear system is obtained based on edge image fitting; the second target pixel point is a pixel point which represents the edge of the vehicle to be detected in the edge image;
the scaling coefficient determining unit is used for obtaining a scaling coefficient based on the motion track and the vehicle type of the vehicle; the scaling coefficient is used for representing the scaling ratio between the coordinates of the target static image and the world coordinates;
and the preset speed measurement parameter updating unit is used for determining the first vanishing point, the second vanishing point and the scaling coefficient as new preset speed measurement parameters.
Optionally, in a possible implementation manner, the target static image includes a plurality of vehicles to be tested; each vehicle to be tested corresponds to one target linear system; the second vanishing point determining unit includes:
the second candidate fitting straight line determining subunit is used for determining two second candidate fitting straight lines corresponding to each target straight line system; the second candidate fitting straight line is a straight line which has the number of second target pixel points passing through the target straight line system larger than or equal to a second preset threshold value and has an included angle with the lane line larger than a preset included angle; the target straight line system is obtained based on the edge image fitting of the corresponding vehicle to be detected; the second target pixel point is a pixel point which represents the edge of the vehicle to be detected in the edge image;
the pixel point calculation subunit is used for calculating the total number of second target pixel points in the edge image through which the two second candidate fitting straight lines corresponding to each target straight line system pass;
the score calculating subunit is used for calculating the product of the total number and the vehicle type weight of the vehicle to be detected corresponding to the target straight line system as the score of that target straight line system, wherein the vehicle type weight is used for representing the edge definition of the vehicle type of the vehicle to be detected;
and the second vanishing point determining subunit is used for determining that the two second candidate fitting straight lines corresponding to the target straight line system with the highest score are two second target fitting straight lines, and determining that the intersection point of the two second target fitting straight lines is the second vanishing point.
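The scoring rule described by these subunits can be sketched as follows. This is an assumption-laden illustration, not the patent's code: the weight table, field names, and sample counts are hypothetical; the rule itself (score = total edge pixels on the two lines × vehicle type weight, highest score wins) is as described above.

```python
# Hypothetical edge-definition weights per vehicle type: large vehicles
# tend to have longer, cleaner edges, so they are weighted higher.
TYPE_WEIGHTS = {"truck": 1.0, "bus": 0.9, "car": 0.7}

def best_line_system(systems):
    """Pick the target straight line system with the highest score.

    systems: list of dicts with 'pixel_counts' (second target pixels on
    each of the two candidate lines) and 'vehicle_type'.
    """
    def score(s):
        return sum(s["pixel_counts"]) * TYPE_WEIGHTS[s["vehicle_type"]]
    return max(systems, key=score)

candidates = [
    {"pixel_counts": (120, 110), "vehicle_type": "car"},    # 230 * 0.7 = 161.0
    {"pixel_counts": (100, 95),  "vehicle_type": "truck"},  # 195 * 1.0 = 195.0
]
winner = best_line_system(candidates)  # the truck's line system wins
```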
Optionally, in a possible implementation manner, the motion speed determining module includes:
the target tracking unit is used for carrying out target tracking on the vehicle to be detected based on the video so as to determine a plurality of positions of the vehicle to be detected in the multi-frame static image of the video;
the displacement determining unit is used for respectively mapping the plurality of positions to a world coordinate system based on preset speed measurement parameters so as to obtain the displacement between any two positions;
and the movement speed calculation unit is used for acquiring the time difference between any two positions and calculating the movement speed of the vehicle to be measured based on the time difference and the displacement.
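The speed computation these units describe is displacement in world coordinates divided by the time difference between frames. A minimal sketch, with one deliberate simplification: a single scale factor stands in for the full vanishing-point-based image-to-world mapping, and all names are illustrative rather than taken from the patent.

```python
import math

def speed_from_track(pixel_positions, timestamps, scale_m_per_px):
    """Average speed (m/s) of a tracked vehicle between two frames.

    Simplification: one uniform metres-per-pixel factor replaces the
    perspective mapping built from the two vanishing points.
    """
    (x0, y0) = pixel_positions[0]
    (x1, y1) = pixel_positions[-1]
    displacement_m = scale_m_per_px * math.hypot(x1 - x0, y1 - y0)
    dt = timestamps[-1] - timestamps[0]
    return displacement_m / dt

# A vehicle moves 300 px along the lane in 2 s at 0.05 m/px:
v = speed_from_track([(100, 500), (100, 200)], [0.0, 2.0], 0.05)
# 300 px * 0.05 m/px = 15 m over 2 s -> 7.5 m/s
```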
The embodiment of the disclosure provides a speed measuring device for executing the speed measuring method described above. The speed measuring device may be the speed measuring equipment involved in this disclosure, a module in the speed measuring equipment, a chip in the speed measuring equipment, or another apparatus capable of executing the speed measuring method, which is not limited in this disclosure.
When the speed measuring device is implemented by hardware, a specific implementation manner of the speed measuring device in the embodiment of the present application is shown in fig. 6, and fig. 6 is a schematic structural diagram of the speed measuring device provided in the embodiment of the present disclosure, where the speed measuring device 600 includes at least one processor 601, a communication line 602, and at least one communication interface 604, and may further include a memory 603. The processor 601, the memory 603 and the communication interface 604 may be connected via a communication line 602.
The processor 601 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present disclosure, such as: one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs).
The communication line 602 may include a path for communicating information between the aforementioned components.
The communication interface 604 is used for communicating with other devices or a communication network, such as Ethernet, a Radio Access Network (RAN), or a Wireless Local Area Network (WLAN), and may use any transceiver-like apparatus.
In a possible design, the memory 603 may exist separately from the processor 601, that is, the memory 603 may be a memory external to the processor 601, in which case, the memory 603 may be connected to the processor 601 through the communication line 602, and is configured to store the execution instruction or the application program code, and is controlled by the processor 601 to execute the method for measuring speed provided by the embodiment of the present disclosure. In yet another possible design, the memory 603 may also be integrated with the processor 601, that is, the memory 603 may be an internal memory of the processor 601, for example, the memory 603 is a cache memory, and may be used for temporarily storing some data and instruction information.
As one implementation, processor 601 may include one or more CPUs, such as CPU0 and CPU1 in fig. 6. As another implementation, the speed measuring device 600 may include a plurality of processors, such as the processor 601 and the processor 607 in fig. 6. As yet another implementation, the speed measuring device 600 can further include an output device 605 and an input device 606.
Through the description of the above embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of functional modules is merely an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the system, modules, and device described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
The embodiment of the present disclosure further provides a computer-readable storage medium, in which instructions are stored, and when the computer executes the instructions, the computer executes each step in the method flow shown in the foregoing method embodiment.
Embodiments of the present disclosure provide a computer program product containing instructions, which when executed on a computer, cause the computer to execute the speed measurement method in the above method embodiments.
Embodiments of the present disclosure provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a computer program or instructions to implement the speed measurement method in the above method embodiments.
The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination thereof. More specific examples (a non-exhaustive list) of the computer readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), registers, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any other suitable form or combination of computer-readable storage media. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an Application Specific Integrated Circuit (ASIC). In the disclosed embodiments, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Since the apparatus, the device, the computer-readable storage medium, and the computer program product in the embodiments of the present disclosure may be applied to the method, for technical effects that can be obtained by the apparatus, the computer-readable storage medium, and the computer program product, reference may also be made to the method embodiments, and details of the embodiments of the present disclosure are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The above is only a specific embodiment of the present disclosure, but the scope of the present disclosure is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure should be subject to the protection scope of the claims.
Claims (10)
1. A method of measuring a speed, the method comprising:
acquiring a video shot by a collecting device on a target road, wherein the video comprises a plurality of frames of static images;
under the condition that the multiple frames of static images comprise a target static image, judging whether the shooting visual angle of the target static image is a preset visual angle or not; a vehicle to be detected exists in the target static image;
if the shooting visual angle of the acquisition equipment is not the preset visual angle, updating a preset speed measurement parameter based on the target static image;
and acquiring the movement speed of the vehicle to be detected based on the video and the preset speed measurement parameter.
2. The method according to claim 1, wherein the determining whether the shooting angle of view of the target still image is a preset angle of view comprises:
acquiring a standard image shot by the acquisition equipment on the target road based on a preset visual angle, and acquiring standard feature points and a standard descriptor of the standard image;
performing image feature extraction on the target static image to obtain target feature points and a target descriptor of the target static image;
performing feature matching on the target feature points and the standard feature points based on the target descriptor and the standard descriptor;
when matched feature points exist between the target feature and the preset feature, calculating the position offset between each pair of matched feature points; the matched feature points comprise pairs of target feature points and standard feature points whose similarity measure is smaller than a preset similarity;
and when the target feature and the preset feature do not have matched feature points, or when each offset is greater than a preset offset, determining that the shooting visual angle of the target static image is not a preset visual angle.
3. The method of claim 2, further comprising:
and when the matched feature points do not exist in the target feature and the preset feature or the offset is larger than a preset offset, taking the target static image as a new standard image.
4. The method of claim 2, further comprising:
when matched feature points exist in the target feature and the preset feature, acquiring a time length difference between the updating time of the standard image and the target time corresponding to the target static image;
and when the time length difference is larger than a preset time length difference, taking the target static image as a new standard image.
5. The method according to claim 1, wherein the updating the preset tachometer parameter based on the target static image comprises:
acquiring a first vanishing point based on the lane line segmentation map of the target static image; the first vanishing point is the intersection point of two first target fitting straight lines in the lane line segmentation map; a first target fitting straight line is a straight line that passes through a number of first target pixel points greater than or equal to a first preset threshold value; a first target pixel point is a pixel point corresponding to the lane line in the lane line segmentation map;
determining a motion track and a vehicle type of the vehicle to be detected and an edge image of the vehicle to be detected based on a vehicle detection frame of the vehicle to be detected in the target static image and the video; the edge image is obtained according to the region of interest corresponding to the vehicle detection frame;
obtaining a second vanishing point based on the edge image; the second vanishing point is the intersection point of two second target fitting straight lines in the edge image; a second target fitting straight line is a straight line in the target straight line system that passes through a number of second target pixel points greater than or equal to a second preset threshold value and forms an included angle with the lane line greater than a preset included angle; the target straight line system is obtained by fitting based on the edge image; a second target pixel point is a pixel point in the edge image that represents an edge of the vehicle to be detected;
obtaining a scaling coefficient based on the motion trail of the vehicle and the vehicle type; wherein the scaling factor is used for representing the scaling ratio between the coordinates of the target static image and the world coordinates;
and determining the first vanishing point, the second vanishing point and the scaling coefficient as new preset speed measuring parameters.
6. The method of claim 5, wherein the target static image comprises a plurality of vehicles to be detected; each vehicle to be detected corresponds to one target straight line system; and the obtaining a second vanishing point based on the edge image comprises:
determining two second candidate fitting straight lines corresponding to each target straight line system; a second candidate fitting straight line is a straight line in the target straight line system that passes through a number of second target pixel points greater than or equal to a second preset threshold value and forms an included angle with the lane line greater than a preset included angle; each target straight line system is obtained by fitting based on the edge image of the corresponding vehicle to be detected; a second target pixel point is a pixel point in the edge image that represents an edge of the vehicle to be detected;
calculating the total number of second target pixel points in the edge image through which the two second candidate fitting straight lines corresponding to each target straight line system pass;
calculating the product of the total number and the vehicle type weight of the vehicle to be detected corresponding to the target straight line system as the score of that target straight line system, wherein the vehicle type weight is used for representing the edge definition of the vehicle type of the vehicle to be detected;
and determining two second candidate fitting straight lines corresponding to the target straight line system with the highest score as two second target fitting straight lines, and determining the intersection point of the two second target fitting straight lines as a second vanishing point.
7. The method according to claim 1 or 5, wherein the obtaining of the moving speed of the vehicle to be tested based on the video and the preset speed measurement parameter comprises:
performing target tracking on the vehicle to be detected based on the video to determine a plurality of positions of the vehicle to be detected in a plurality of frames of static images of the video;
respectively mapping the positions to a world coordinate system based on the preset speed measurement parameters so as to obtain the displacement between any two positions;
and acquiring the time difference between any two positions, and calculating the movement speed of the vehicle to be detected based on the time difference and the displacement.
8. A speed measuring device, the device comprising:
the video image acquisition module is used for acquiring a video shot by the acquisition equipment on a target road, wherein the video comprises a plurality of frames of static images;
the visual angle judging module is used for judging whether the shooting visual angle of the target static image is a preset visual angle or not under the condition that the multi-frame static image comprises the target static image; a vehicle to be detected exists in the target static image;
the preset speed measurement parameter updating module is used for updating a preset speed measurement parameter based on the target static image if the shooting visual angle of the acquisition equipment is judged not to be the preset visual angle;
and the movement speed determining module is used for acquiring the movement speed of the vehicle to be detected based on the video and the preset speed measuring parameters.
9. A speed measuring device, comprising: a processor and a memory; wherein the memory is used for storing computer-executable instructions, and when the speed measuring device runs, the processor executes the computer-executable instructions stored in the memory to make the speed measuring device execute the speed measuring method according to any one of claims 1 to 7.
10. A computer readable storage medium having instructions stored therein, which when executed by a processor of a speed measuring device, cause the speed measuring device to perform the speed measuring method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211456784.XA CN115984321A (en) | 2022-11-21 | 2022-11-21 | Speed measuring method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211456784.XA CN115984321A (en) | 2022-11-21 | 2022-11-21 | Speed measuring method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115984321A true CN115984321A (en) | 2023-04-18 |
Family
ID=85968863
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211456784.XA Pending CN115984321A (en) | 2022-11-21 | 2022-11-21 | Speed measuring method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115984321A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118644991A (en) * | 2024-08-16 | 2024-09-13 | 山东高速股份有限公司 | Road condition judging method, device, equipment, storage medium and product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113819890B (en) | Distance measuring method, distance measuring device, electronic equipment and storage medium | |
Gurghian et al. | DeepLanes: End-to-end lane position estimation using deep neural networks | |
US20170248971A1 (en) | Method for detecting target object, detection apparatus and robot | |
CN107463890B (en) | A kind of Foregut fermenters and tracking based on monocular forward sight camera | |
US9336595B2 (en) | Calibration device, method for implementing calibration, and camera for movable body and storage medium with calibration function | |
CN109087510A (en) | traffic monitoring method and device | |
CN112733812A (en) | Three-dimensional lane line detection method, device and storage medium | |
CN111738071B (en) | Inverse perspective transformation method based on motion change of monocular camera | |
CN114415736B (en) | Multi-stage visual accurate landing method and device for unmanned aerial vehicle | |
CN114926726B (en) | Unmanned ship sensing method based on multitask network and related equipment | |
US11776277B2 (en) | Apparatus, method, and computer program for identifying state of object, and controller | |
CN113743385A (en) | Unmanned ship water surface target detection method and device and unmanned ship | |
CN109636828A (en) | Object tracking methods and device based on video image | |
CN111046746A (en) | License plate detection method and device | |
CN112257668A (en) | Main and auxiliary road judging method and device, electronic equipment and storage medium | |
CN115861352A (en) | Monocular vision, IMU and laser radar data fusion and edge extraction method | |
CN115331151A (en) | Video speed measuring method and device, electronic equipment and storage medium | |
CN115984321A (en) | Speed measuring method, device, equipment and storage medium | |
CN115100616A (en) | Point cloud target detection method and device, electronic equipment and storage medium | |
CN114972492A (en) | Position and pose determination method and device based on aerial view and computer storage medium | |
CN116778262B (en) | Three-dimensional target detection method and system based on virtual point cloud | |
CN112733678A (en) | Ranging method, ranging device, computer equipment and storage medium | |
CN115880648B (en) | Crowd gathering identification method and system under unmanned plane angle and application thereof | |
CN114898306B (en) | Method and device for detecting target orientation and electronic equipment | |
Yang et al. | A novel vision-based framework for real-time lane detection and tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||