
CN101118648A - Road conditions video camera marking method under traffic monitoring surroundings - Google Patents


Info

Publication number
CN101118648A
CN101118648A (application numbers CNA2007100228107A, CN200710022810A)
Authority
CN
China
Prior art keywords
camera
image
sin
coordinate system
cos
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2007100228107A
Other languages
Chinese (zh)
Inventor
陈启美
李勃
郭凡
董蓉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CNA2007100228107A priority Critical patent/CN101118648A/en
Publication of CN101118648A publication Critical patent/CN101118648A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Closed-Circuit Television Systems (AREA)

Abstract

A method for calibrating a road condition camera in a traffic monitoring environment includes the following calibration steps: (1) Visual model description and establishment of the related coordinate systems: based on the performance requirements of the monitoring system, the classical Tsai perspective projection model is taken as a reference and corrected according to the characteristics of road condition imaging, a new visual model is proposed, and three coordinate systems are established. (2) Calibration of the camera principal point and scaling factor: the optical flow of the monitoring image is used as the calibration primitive; by zooming the camera, the difference between the optical flow fields of a reference-frame predicted image and a real-time sampled frame image is used as the constraint, a constraint equation is established with the least squares method, and the camera principal point coordinates and the actual magnification factor are identified with Powell's direction set method. (3) Calibration target selection and linear parameter estimation. (4) Refinement of the internal and external camera parameters: using all corner points of the monitored image and the corresponding world coordinate points, the Levenberg-Marquardt optimization algorithm is adopted to refine the camera model parameters, completing the camera calibration.

Description

Road condition camera calibration method under traffic monitoring environment
Technical Field
The invention belongs to the technical field of intelligent traffic, and particularly relates to a calibration method of a road condition camera under a traffic monitoring environment, which is mainly used for accurately calibrating a pan-tilt camera under the traffic monitoring environment.
Background
Traffic video monitoring technology, which builds on television video, communication and networking, pattern recognition, and computer vision technologies to achieve automatic collection and analysis of traffic information flow, is a research direction with broad application prospects in the field of intelligent transportation. Its aim is to use computer vision techniques to analyze road condition monitoring image sequences in order to detect, locate, identify, and track moving traffic targets such as vehicles and pedestrians, and to analyze and judge the traffic behavior of the detected and tracked targets, thereby acquiring various kinds of traffic flow data, supporting the daily management and control tasks related to traffic management, forming an all-directional, three-dimensional digital traffic monitoring network, and truly realizing intelligent traffic management.
To meet these monitoring requirements, most road condition cameras are zoom cameras mounted on pan-tilt platforms. When the field of view of the camera changes, the mapping between the monitored road section and the imaging plane changes accordingly; the current state of the camera must therefore be calibrated by analyzing the current road condition image, so that the correspondence between scene points and image pixels is determined and the influence of camera state changes on the computation of parameters such as vehicle speed, vehicle type, and travel distance is avoided.
Camera calibration is an important step for extracting three-dimensional spatial information from two-dimensional images in the field of computer vision. The actual camera device must be abstracted according to the geometric imaging principle of the camera, yielding a visual model of the mapping between the feature space and the imaging plane. Starting from this visual model and its imaging characteristics, a suitable reference object is then selected in the image plane, the related image feature parameters are analyzed and extracted, and each unknown parameter of the visual model is calibrated.
Calibration of road condition cameras, an important engineering application of camera calibration, mostly uses either the traditional calibration method or camera self-calibration methods. In the traditional method, a calibration block of known structure serves as the spatial reference object; constraints among the camera model parameters are established from a set of non-degenerate (non-coplanar) spatial points on the calibration block and their corresponding image projection points, and the parameters are then solved by an optimization algorithm. Self-calibration methods directly analyze several acquired images, extract matching points that express the constraints among the camera's intrinsic parameters, and construct a virtual calibration block based on the absolute conic (quadric), thereby calibrating the camera parameters.
The traditional calibration method achieves high accuracy, but it requires calibration objects to be placed on site, the calibration points are difficult to set up, the process is complex, and a large number of image features is required; it is only suitable for small-scale scenes and does not meet the monitoring requirements of large road condition scenes. Self-calibration algorithms make full use of the monitoring image information of the viewed scene, but during calibration the monitoring camera must be driven through several rigid motions and image feature parameters must be accurately matched across different viewpoints; such algorithms therefore depend on very demanding image recognition and feature matching techniques and have poor robustness.
To simplify the calibration process and exploit the specific characteristics of traffic monitoring scenes, Nelson, Grantham, George, and others introduced a camera calibration method for traffic video monitoring systems based on a simple imaging model, which directly uses the rectangular calibration target formed by road lane line corner points in the traffic scene to calibrate the focal length and orientation parameters of the camera. Similarly, the invention CN1564581A discloses a self-calibration method for calibrating the focal length and spatial external parameters of a camera in an urban traffic monitoring environment, which uses several special straight lines on the monitored road surface in the traffic scene as calibration targets. These methods are simple to implement and have linear computational complexity; however, when establishing the three-dimensional mapping model, some of the intrinsic parameters of the camera model (principal point coordinates and distortion parameters) must be fixed, which narrows the applicability of the calibration algorithm and makes it difficult to meet high-accuracy camera calibration requirements.
Disclosure of Invention
The invention aims to provide a novel multi-stage camera calibration method for the traffic monitoring environment that addresses the shortcomings of existing camera calibration techniques.
To meet the monitoring requirements, a high-accuracy camera visual imaging model is established and the correspondence between object space and the image plane is determined; an optimization model based on the camera principal point is established from the optical flow characteristics of scene images taken at different nominal magnification factors in the monitored scene, so as to calibrate the camera principal point and the actual magnification factor; corner points on adjacent lane lines in the scene image are selected as the calibration target, a constraint equation containing the effective focal length and the camera spatial position parameters is established using the parallelism of the lane lines and the reference road width between them as constraints, and the internal and external parameters of the visual model are solved.
The algorithm adopts a complete camera model that fully accounts for the variation of the camera principal point and the radial distortion of the lens, meeting high-accuracy calibration requirements; a multi-stage calibration method decomposes the camera's intrinsic parameters into fixed and variable parameters, simplifying the calibration procedure; and no complex feature matching is needed during calibration, which improves the robustness of the algorithm.
The invention is realized by the following method: the road condition camera calibration method under the traffic monitoring environment comprises the following calibration steps:
(1) Visual model description and related coordinate system establishment: based on the performance requirements of the monitoring system, the classical Tsai perspective projection model is taken as a reference and corrected according to the road condition imaging characteristics, a new visual model is proposed, and three coordinate systems are established:
The ground (world) coordinate system X_w-Y_w-Z_w and the camera coordinate system X_c-Y_c-Z_c characterize three-dimensional space; the image plane coordinate system X_f-Y_f characterizes the imaging plane. The world coordinate system is established with its origin at the intersection of the camera optical axis and the ground plane: the Y_w axis points forward along the road surface, the X_w axis points horizontally to the right, and the Z_w axis is normal to the ground and points upward. The camera coordinate system is established with its origin at the optical center of the camera: the Z_c axis is the optical axis of the camera, and the X_c-Y_c plane is parallel to the image plane.
X_c = (cos(p)cos(s) + sin(t)sin(p)sin(s)) X_w + (sin(p)cos(s) - sin(t)cos(p)sin(s)) Y_w
Y_c = (-cos(p)sin(s) + sin(t)sin(p)cos(s)) X_w + (-sin(p)sin(s) + sin(t)cos(p)cos(s)) Y_w        (1)
Z_c = -cos(t)sin(p) X_w + cos(t)cos(p) Y_w + l
(Formula (2), the perspective mapping from the camera coordinate system to the image plane coordinates, is given only as an image in the original document.)
With this visual model established, the mapping between the world coordinate system and the image plane coordinate system is given by formula (3) (shown only as an image in the original document). Starting from formula (3), with Z_w as a known parameter, the inverse mapping from image plane coordinates to the world coordinate system can be established, as given by formula (4) (also shown only as an image in the original document).
In these formulas, H is the vertical installation height of the camera; t, p, and s are the pitch, deflection, and rotation angles of the camera, respectively; f is the wide-angle effective focal length; M(z) is the scaling factor; C_x(z) and C_y(z) are the image principal point coordinates; and k is the first-order radial distortion coefficient.
(2) Calibrating the camera principal point and scaling factor: the optical flow of the monitoring image is taken as the calibration primitive. By zooming the camera, the difference between the optical flow fields of a reference-frame predicted image and a real-time sampled frame image is used as the constraint, a constraint equation is established with the least squares method, and the camera principal point coordinates and the actual magnification factor are identified with Powell's direction set method.
(3) Calibration target selection and linear parameter solution: lane line corner points in the monitored scene are selected as calibration reference objects, and the slope of the vanishing line and the corresponding camera rotation angle are determined from the image plane projections of four adjacent lane line corner points; using the parallelism of the lane lines and the basic road width between them as constraints, the effective focal length and the camera spatial position parameters are solved linearly from the image plane coordinates of the four corner points.
(4) Camera distortion compensation: the ideal pixel coordinates of all corner points on the image are computed with the calibrated, complete, distortion-free ideal camera imaging model, and the first-order radial distortion of the nonlinear visual model is solved from the coordinate differences between these ideal coordinates and the actual image plane projections of the corner points.
(5) Refining the internal and external camera parameters: all corner points in the monitoring image and their corresponding world coordinate points are used, the visual model parameters computed by the above algorithm serve as the initial values of the optimization model, and the Levenberg-Marquardt optimization algorithm is adopted to refine the camera model parameters, completing the camera calibration.
The invention is characterized in that the calibration method is simple to implement, meets high-accuracy camera calibration requirements, and is suitable for multi-stage calibration of cameras in a traffic monitoring environment.
Drawings
FIG. 1 is a camera imaging visual model established by the invention
FIG. 2 is a schematic view of a corner point calibration target aerial view selected in the present invention
FIG. 3 is a schematic diagram of vanishing point and horizon determined by the corner point calibration target
FIG. 4 is an original image of a traffic scene used in an embodiment of the present invention
FIG. 5 is a schematic diagram of a corner point calibration target selected in the embodiment of the present invention
FIG. 6 is a schematic diagram of distribution of variable parameters of a camera according to an embodiment of the present invention
Wherein FIG. 6a is the principal point coordinate distribution and FIG. 6b is the magnification factor distribution
Detailed Description
(1) Visual model description and related coordinate system establishment: based on the performance requirements of the monitoring system, the classical Tsai perspective projection model is taken as a reference and corrected according to the road condition imaging characteristics, yielding a new visual model, as shown in FIG. 1.
Three coordinate systems are defined in the figure. The ground (world) coordinate system X_w-Y_w-Z_w and the camera coordinate system X_c-Y_c-Z_c characterize three-dimensional space; the image plane coordinate system X_f-Y_f characterizes the imaging plane. The world coordinate system is established with its origin at the intersection of the camera optical axis and the ground: the Y_w axis points forward along the road surface, the X_w axis points horizontally to the right, and the Z_w axis is normal to the ground and points upward. The camera coordinate system is established with its origin at the optical center of the camera: the Z_c axis is the optical axis of the camera, and the X_c-Y_c plane is parallel to the image plane.
The distance between the camera optical center and the origin of the world coordinate system is denoted l, the camera pitch angle (the angle between the camera optical axis and the ground plane) is t, the deflection angle (the angle between the optical axis and the lane dividing line) is p, and the rotation angle is s. The influence of the highway gradient is neglected, and the region between parallel lines on the ground plane corresponds to the highway pavement within the camera's field of view.
Based on the defined camera spatial orientation parameters, a coordinate transformation relationship between the ground coordinate system and the camera coordinate system can be established, as shown in equation (1) above.
According to the perspective transformation principle, a coordinate mapping relationship between the two-dimensional image coordinate system and the camera coordinate system can be established, as shown in the above equation (2).
where f is the wide-angle effective focal length, M(z) is the zoom factor, C_x(z) and C_y(z) are the image principal point coordinates, and k is the first-order radial distortion coefficient.
Without loss of generality, let X_f denote X_f - C_x(z), Y_f denote Y_f - C_y(z), and f denote f × M(z); combining formulas (1) and (2) then establishes the mapping between the world coordinate system and the image plane coordinate system under the ideal perspective model, as shown in formula (3) above.
Starting from formula (3), with Z_w as a known parameter, the inverse mapping from image plane coordinates to the world coordinate system can be established, as shown in formula (4) above.
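As a concrete illustration of the coordinate conventions above, the following minimal Python sketch evaluates the ground-to-camera transform of formula (1); it is an illustration only, since formula (2), the projection onto the image plane, is given solely as an image in the original document.

```python
# Minimal sketch of formula (1): ground (world) coordinates -> camera coordinates,
# for camera pitch t, deflection p, rotation s (in radians) and optical-centre
# distance l, as defined above.  Formula (2), the image-plane projection, is not
# reproduced because it appears only as an image in the original document.
import numpy as np

def world_to_camera(xw, yw, t, p, s, l):
    xc = (np.cos(p) * np.cos(s) + np.sin(t) * np.sin(p) * np.sin(s)) * xw \
         + (np.sin(p) * np.cos(s) - np.sin(t) * np.cos(p) * np.sin(s)) * yw
    yc = (-np.cos(p) * np.sin(s) + np.sin(t) * np.sin(p) * np.cos(s)) * xw \
         + (-np.sin(p) * np.sin(s) + np.sin(t) * np.cos(p) * np.cos(s)) * yw
    zc = -np.cos(t) * np.sin(p) * xw + np.cos(t) * np.cos(p) * yw + l
    return xc, yc, zc
```

For a point on the road surface (Z_w = 0), a call such as world_to_camera(10.0, 50.0, t=0.2, p=0.05, s=0.01, l=12.0) returns its coordinates in the camera frame (the numbers are illustrative).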
(2) Determining the camera principal point: the expressway monitoring scene is a complex scene in which precise matching of calibration targets across monitored images is difficult to obtain, so optical flow is used as the calibration primitive. Through camera zooming or pure rotation, the camera intrinsic parameters are identified with the least squares method from the optical flow field between a reference-frame predicted image and a real-time sampled frame image.
Let I_r[x_1] be the reference-frame grayscale image and I_f[x_2] be the sampled grayscale image obtained after changing a subset p of the camera parameters. Since image zooming and pure rotation introduce no new scene content, a transformation G can be introduced to map between the pixel coordinates x_1 and x_2, as shown in equation (5) (given only as an image in the original document).
Consider the same scene with the other camera parameters fixed; without loss of generality, two scene images are taken at different nominal magnification factors, with the reference frame being the wide-angle image I_0 (nominal magnification 0) and the sampled frame being I_n (nominal magnification n). From formula (2), the coordinate mapping H between the reference frame and the sampled frame is given by formula (6):

X'_f = M(n)[X_f - C_x(n)] + C_x(n)
Y'_f = M(n)[Y_f - C_y(n)] + C_y(n)        (6)
Based on this coordinate correspondence, the predicted image I_w[x] based on the reference-frame gray values can be written as

I_w[x] = I[H(x, C_x(n), C_y(n), M(n))]
Therefore, for the sampled image, an error function E(p) measuring the gray-value deviation between the predicted image and the sampled image can be obtained, as shown in equation (7) (given only as an image in the original document).
where V is a subset of the sampled-frame image coordinates, chosen to prevent coordinate overflow that may occur in the coordinate transformation H(x, p).
Minimizing E(p) in equation (7) is a camera parameter optimization problem. The initial values of the optimization model may be set to M(n) = M(n-1), C_x(n) = C_x(n-1), C_y(n) = C_y(n-1), with base values M(0) = 1 and (C_x(0), C_y(0)) equal to the image center coordinates. Because the optimization model involves many sample points, the algorithm is implemented with Powell's direction set method to ensure convergence.
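A minimal sketch of this stage is given below, assuming SciPy's Powell minimizer as the direction set method and grayscale images stored as NumPy arrays; the border margin used for the coordinate subset V and the inversion of the mapping in formula (6) are illustrative choices, not taken from the original document.

```python
# Hedged sketch: estimate (C_x(n), C_y(n), M(n)) by minimizing the gray-value
# error between the warped wide-angle reference frame and the zoomed sample frame.
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import map_coordinates

def predicted_image(ref, cx, cy, m):
    """Predict the zoomed frame from the reference frame.  A pixel (x, y) of the
    zoomed frame maps back through the inverse of formula (6):
    X_f = (X'_f - C_x)/M + C_x (and likewise for Y)."""
    ref = np.asarray(ref, dtype=float)
    h, w = ref.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    src_x = (xx - cx) / m + cx
    src_y = (yy - cy) / m + cy
    return map_coordinates(ref, [src_y, src_x], order=1, mode='nearest')

def energy(params, ref, sample, margin=20):
    """E(p) of equation (7): squared gray-value error over an interior subset V."""
    cx, cy, m = params
    pred = predicted_image(ref, cx, cy, m)
    interior = (slice(margin, -margin), slice(margin, -margin))
    diff = pred[interior] - np.asarray(sample, dtype=float)[interior]
    return float(np.sum(diff ** 2))

def calibrate_principal_point(ref, sample, init):
    """init = (C_x(n-1), C_y(n-1), M(n-1)); returns refined (C_x(n), C_y(n), M(n))."""
    result = minimize(energy, x0=np.asarray(init, dtype=float),
                      args=(ref, sample), method='Powell')
    return result.x
```

In the embodiment described later, such a routine would be called once per nominal magnification step, seeding each call with the previous estimate, as described for equations (6) and (7).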
(3) Calibration target selection and linear parameter solution: when the viewing angle and focal length of the camera are unknown, a reference object must be chosen before the parameters can be calibrated. Highway lane lines are painted to strict standards, so they can serve as the reference object for establishing the correspondence between the uncalibrated camera parameters and the image feature parameters. FIG. 2 shows a parallelogram calibration module based on lane line corner points selected on the monitored road section.
Horizon and vanishing point calculation
According to the perspective projection principle, the image plane projections of several non-coincident parallel straight lines on the ground share the same vanishing point but have different slopes. The vanishing points and the horizon determined by the two pairs of parallel straight lines through the four corner points are shown in FIG. 3.
The vanishing point determined by the lines x_a x_d and x_b x_c is denoted x_0(u_0, v_0), and the vanishing point determined by the lines x_a x_b and x_d x_c is denoted x_1(u_1, v_1); their coordinates are given by formula (8) (shown only as images in the original document).
The slope of the horizon line, i.e. the tangent of the rotation angle, is given by the expression shown only as an image in the original document.
According to the obtained rotation angle, recording
Figure A20071002281000081
Figure A20071002281000082
The mapping equation (Z) of the image plane and the ground plane coordinate can be obtained w =0)
Figure A20071002281000083
Figure A20071002281000084
According to the parallel corresponding relation between the angular points, the following equation is provided:
Y A -Y B =Y C -Y D
(10)
X A -X C =X B -X D
Highway construction is standardized and the road width is generally a constant value, which gives the equation

X_D - X_C = w        (11)
For convenience of notation, u and v denote the quantities defined in the original document (given only as an image).
Combining equations (10) and (11), the unknown camera parameters t, p, s, f, and l can be solved, as shown in formula (12); apart from the expression

f = v_0 / tan(t)

the components of formula (12) are given only as images in the original document.
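The vanishing-point construction used here can be sketched in a few lines of Python (a hedged illustration with NumPy; the corner labels a, b, c, d follow FIG. 2 and FIG. 3, while the linear solve of formula (12) itself is not reproduced because it appears only as images).

```python
# Hedged sketch of the vanishing-point geometry described above: the two pairs of
# lines through the lane-line corner points a, b, c, d intersect at the vanishing
# points x0 and x1, and the slope of the horizon through them gives tan(s), the
# camera rotation angle.
import numpy as np

def line_through(pt1, pt2):
    """Homogeneous line through two image points (u, v)."""
    return np.cross([pt1[0], pt1[1], 1.0], [pt2[0], pt2[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines, returned as (u, v); assumes the
    lines are not parallel in the image."""
    x = np.cross(l1, l2)
    return x[:2] / x[2]

def rotation_angle(xa, xb, xc, xd):
    # Vanishing point of the lane direction (lines x_a x_d and x_b x_c).
    x0 = intersection(line_through(xa, xd), line_through(xb, xc))
    # Vanishing point of the transverse direction (lines x_a x_b and x_d x_c).
    x1 = intersection(line_through(xa, xb), line_through(xd, xc))
    # Slope of the horizon through the two vanishing points = tan(s).
    s = np.arctan2(x1[1] - x0[1], x1[0] - x0[0])
    return x0, x1, s
```

The returned angle s then enters the linear solve of formula (12) together with the constraints of equations (10) and (11).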
(4) Camera distortion compensation: considering the first-order radial distortion of the nonlinear camera model, formula (2) is rewritten as formula (13):

X_d = X_f + (X_f - C_x(z)) * k * r^2
Y_d = Y_f + (Y_f - C_y(z)) * k * r^2        (13)
Since the complete, distortion-free ideal camera imaging model has already been calibrated, the ideal pixel coordinates (X_f, Y_f) are known; formula (13) can therefore be rewritten as a linear equation in k, as shown in formula (14) (given only as an image in the original document).
Given n corner points in an image, 2n such equations are obtained in total and expressed in matrix form as

D k = d

whose least squares solution is given by formula (15):

k = (D^T D)^(-1) D^T d        (15)
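A minimal NumPy sketch of this least squares step is shown below; the definition of r^2 as the squared distance of the ideal point from the principal point is an assumption (formula (14) itself is only an image in the original document), and the function and argument names are illustrative.

```python
# Hedged sketch of the radial-distortion least squares step: each corner point
# contributes two rows of D k = d following formula (13).
import numpy as np

def solve_radial_distortion(ideal_pts, observed_pts, principal_point):
    """ideal_pts: Nx2 ideal (distortion-free) pixel coords predicted by the model;
    observed_pts: Nx2 measured corner coords; returns the scalar k of formula (13)."""
    cx, cy = principal_point
    rows, rhs = [], []
    for (xf, yf), (xd, yd) in zip(ideal_pts, observed_pts):
        # Assumed: r^2 is the squared radius of the ideal point about the principal point.
        r2 = (xf - cx) ** 2 + (yf - cy) ** 2
        # X_d = X_f + (X_f - C_x) * k * r^2  ->  (X_f - C_x) * r^2 * k = X_d - X_f
        rows.append((xf - cx) * r2); rhs.append(xd - xf)
        rows.append((yf - cy) * r2); rhs.append(yd - yf)
    D = np.asarray(rows, dtype=float).reshape(-1, 1)
    d = np.asarray(rhs, dtype=float)
    # Numerically stable equivalent of k = (D^T D)^-1 D^T d.
    k, *_ = np.linalg.lstsq(D, d, rcond=None)
    return float(k[0])
```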
(5) Refining the internal and external parameters of the camera: to obtain accurate camera model parameters, all extracted image corner points and their corresponding world coordinate points need to be considered; the optimization model shown in formula (16) is established, and all internal and external parameters are refined.
(Formula (16), the optimization objective, is given only as an image in the original document.) In this formula:
n: number of corner points on the scene image;
W_i: actual world coordinates of the i-th corner point on the scene image;
(symbol given as an image): actual image coordinates of the i-th corner point;
(symbol given as an image): mapped image coordinates obtained by substituting W_i into the actual imaging model.
The initial values of the optimization model are the camera model parameter values and the initial distortion coefficient obtained by the algorithm described above, and the model parameters are solved with the Levenberg-Marquardt optimization algorithm.
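A hedged SciPy sketch of this refinement step follows; the pinhole projection used inside project() stands in for formula (2), which appears only as an image in the original, and the parameter layout [t, p, s, f, l, c_x, c_y] is an illustrative assumption rather than the patent's exact parameterization.

```python
# Hedged sketch of the Levenberg-Marquardt refinement: minimize the reprojection
# error of all corner points over the camera model parameters.
import numpy as np
from scipy.optimize import least_squares

def project(params, world_pts):
    """Map ground points (Z_w = 0) to image points under the current estimate.
    The rotation follows formula (1) as printed above; the pinhole step standing
    in for formula (2) and the omission of the distortion of formula (13) are
    simplifying assumptions of this sketch."""
    t, p, s, f, l, cx, cy = params
    world_pts = np.asarray(world_pts, dtype=float)
    xw, yw = world_pts[:, 0], world_pts[:, 1]
    xc = (np.cos(p)*np.cos(s) + np.sin(t)*np.sin(p)*np.sin(s)) * xw \
         + (np.sin(p)*np.cos(s) - np.sin(t)*np.cos(p)*np.sin(s)) * yw
    yc = (-np.cos(p)*np.sin(s) + np.sin(t)*np.sin(p)*np.cos(s)) * xw \
         + (-np.sin(p)*np.sin(s) + np.sin(t)*np.cos(p)*np.cos(s)) * yw
    zc = -np.cos(t)*np.sin(p) * xw + np.cos(t)*np.cos(p) * yw + l
    return np.column_stack([f * xc / zc + cx, f * yc / zc + cy])

def refine_parameters(initial_params, world_pts, image_pts):
    """image_pts: Nx2 measured corner coordinates; returns the refined parameters."""
    image_pts = np.asarray(image_pts, dtype=float)
    def residuals(params):
        return (project(params, world_pts) - image_pts).ravel()
    # method='lm' selects SciPy's Levenberg-Marquardt solver.
    return least_squares(residuals, np.asarray(initial_params, dtype=float),
                         method='lm').x
```

A full implementation would also apply the radial distortion of formula (13) to the projected coordinates before forming the residuals.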
The embodiment proceeds as follows:
(1) The nominal magnification range of the monitoring camera is set to 0 to k. With the other camera parameters fixed for the same scene, the nominal magnification is increased gradually with step length m, and k/m scene images are captured. Taking the image center as the initial value of the principal point, the camera principal point and the actual magnification factor of the image at each nominal magnification are solved iteratively with Powell's direction set method using equations (6) and (7).
(2) Corner points of the scene image are detected, and the fixed internal parameters and external spatial orientation parameters of the camera are computed from the four adjacent corner points near the image principal point using formulas (9) and (12); the first-order radial distortion of the camera is computed from all corner points on the image using formula (15).
(3) The detected image corner points and their corresponding world coordinate points are substituted into equation (16), the nonlinear minimization model is solved, and all refined camera parameters are obtained.
TABLE 1  Object-image conversion accuracy

                      Minimum    Maximum    Variance
Image plane    u      -0.813      0.704      0.4331
               v      -0.544      0.427      0.4009
World coords   x      -0.585      0.322      0.2330
               y      -0.233      0.212      0.1331
In order to verify the effectiveness of the proposed method, an embodiment of the invention uses the highway traffic scene image shown in FIG. 4, selects lane line corner points in the traffic scene image as the calibration target, and takes the four corner points near the image principal point as the main calibration features, marked A, B, C, D, as shown in FIG. 5. The distance between the lane lines is known in advance.
FIG. 6, comprising FIG. 6a and FIG. 6b, shows the actual distribution of the camera principal point and magnification factor at different nominal focal lengths. As the nominal magnification increases, the image principal point coordinates increase roughly linearly, with a displacement range of about ±30 pixels from the image center, while the actual magnification factor grows roughly parabolically. Lane line corner points are taken as test sample points; using the camera calibration result, the forward and backward object-image conversions of the sample points are compared with their reference values as the evaluation measure. Table 1 gives the accuracy of the differences between the converted sample points and the reference values. The experimental results show that the multi-stage calibration method of the invention achieves high object-image conversion accuracy and fully meets the accuracy requirements of traffic monitoring.
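The evaluation just described can be expressed as a short, hedged sketch (illustrative names; forward_map and inverse_map stand in for the mappings of formulas (3) and (4)).

```python
# Hedged sketch of the evaluation protocol: sample corner points are mapped
# forward (world -> image) and backward (image -> world) with the calibrated
# model, and the per-axis error statistics are summarised as in Table 1.
import numpy as np

def conversion_accuracy(samples_world, samples_image, forward_map, inverse_map):
    """Returns per-axis (min, max, variance) of the conversion errors."""
    img_err = np.asarray(forward_map(samples_world)) - np.asarray(samples_image)
    wld_err = np.asarray(inverse_map(samples_image)) - np.asarray(samples_world)
    def stats(err):
        return err.min(axis=0), err.max(axis=0), err.var(axis=0)
    return {"image_plane_uv": stats(img_err), "world_xy": stats(wld_err)}
```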

Claims (2)

1. A road condition camera calibration method under a traffic monitoring environment is characterized by comprising the following calibration steps:
(1) Visual model description and related coordinate system establishment: based on the performance requirements of the monitoring system, the classical Tsai perspective projection model is taken as a reference and corrected according to the road condition imaging characteristics, a new visual model is proposed, and three coordinate systems are established:
The ground (world) coordinate system X_w-Y_w-Z_w and the camera coordinate system X_c-Y_c-Z_c characterize three-dimensional space; the image plane coordinate system X_f-Y_f characterizes the imaging plane. The world coordinate system is established with its origin at the intersection of the camera optical axis and the ground: the Y_w axis points forward along the road surface, the X_w axis points horizontally to the right, and the Z_w axis is normal to the ground and points upward. The camera coordinate system is established with its origin at the optical center of the camera: the Z_c axis is the optical axis of the camera, and the X_c-Y_c plane is parallel to the image plane;
X_c = (cos(p)cos(s) + sin(t)sin(p)sin(s)) X_w + (sin(p)cos(s) - sin(t)cos(p)sin(s)) Y_w
Y_c = (-cos(p)sin(s) + sin(t)sin(p)cos(s)) X_w + (-sin(p)sin(s) + sin(t)cos(p)cos(s)) Y_w        (1)
Z_c = -cos(t)sin(p) X_w + cos(t)cos(p) Y_w + l
(Formula (2), the perspective mapping from the camera coordinate system to the image plane coordinates, is given only as an image in the original document.)
With this visual model established, the mapping between the world coordinate system and the image plane coordinate system is given by formula (3) (shown only as an image in the original document). Starting from formula (3), with Z_w as a known parameter, the inverse mapping from image plane coordinates to the world coordinate system can be established, as given by formula (4) (also shown only as an image in the original document).
H is the vertical installation height of the camera; t, p, and s are the pitch, deflection, and rotation angles of the camera, respectively; f is the wide-angle effective focal length; M(z) is the scaling factor; C_x(z) and C_y(z) are the image principal point coordinates; and k is the first-order radial distortion coefficient.
(2) Calibrating the camera principal point and scaling factor: the optical flow of the monitoring image is taken as the calibration primitive; by zooming the camera, the difference between the optical flow fields of a reference-frame predicted image and a real-time sampled frame image is used as the constraint, a constraint equation is established with the least squares method, and the camera principal point coordinates and the actual magnification factor are identified with Powell's direction set method.
(3) Calibration target selection and linear parameter solution: lane line corner points in the monitored scene are selected as calibration reference objects, and the vanishing line slope and the corresponding camera rotation angle are determined from the image plane projections of four adjacent lane line corner points; using the parallelism of the lane lines and the basic road width between them as constraints, the effective focal length and the camera spatial position parameters are solved linearly from the image plane coordinates of the four corner points;
(4) Refining the internal and external parameters of the camera: all corner points in the monitoring image and their corresponding world coordinate points are used, the visual model parameters computed by the above algorithm serve as the initial values of the optimization model, and the Levenberg-Marquardt optimization algorithm is adopted to refine the camera model parameters, completing the camera calibration.
2. The road condition camera calibration method under a traffic monitoring environment according to claim 1, characterized in that camera distortion compensation is adopted: the ideal pixel coordinates of all corner points on the image are computed with the calibrated, complete, distortion-free ideal camera imaging model, and the first-order radial distortion of the nonlinear visual model is solved from the coordinate differences between these ideal pixel coordinates and the actual image plane projections of the corner points.
CNA2007100228107A 2007-05-22 2007-05-22 Road conditions video camera marking method under traffic monitoring surroundings Pending CN101118648A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2007100228107A CN101118648A (en) 2007-05-22 2007-05-22 Road conditions video camera marking method under traffic monitoring surroundings

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2007100228107A CN101118648A (en) 2007-05-22 2007-05-22 Road conditions video camera marking method under traffic monitoring surroundings

Publications (1)

Publication Number Publication Date
CN101118648A true CN101118648A (en) 2008-02-06

Family

ID=39054746

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2007100228107A Pending CN101118648A (en) 2007-05-22 2007-05-22 Road conditions video camera marking method under traffic monitoring surroundings

Country Status (1)

Country Link
CN (1) CN101118648A (en)

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101667288A (en) * 2008-09-02 2010-03-10 新奥特(北京)视频技术有限公司 Method for detecting corner points of communicated regions in binary symbol images
CN101667288B (en) * 2008-09-02 2012-11-14 新奥特(北京)视频技术有限公司 Method for detecting corner points of communicated regions in binary symbol images
CN101425181B (en) * 2008-12-15 2012-05-09 浙江大学 Panoramic view vision auxiliary parking system demarcating method
CN102104791B (en) * 2009-12-17 2012-11-21 财团法人工业技术研究院 Video camera calibration system and coordinate data generation system, and method thereof
CN102013099B (en) * 2010-11-26 2012-07-04 中国人民解放军国防科学技术大学 Interactive calibration method for external parameters of vehicle video camera
CN102013099A (en) * 2010-11-26 2011-04-13 中国人民解放军国防科学技术大学 Interactive calibration method for external parameters of vehicle video camera
CN102075785A (en) * 2010-12-28 2011-05-25 武汉大学 Method for correcting wide-angle camera lens distortion of automatic teller machine (ATM)
CN102075785B (en) * 2010-12-28 2012-05-23 武汉大学 Method for correcting wide-angle camera lens distortion of ATM (automatic Teller machine)
CN102175613B (en) * 2011-01-26 2012-11-14 南京大学 Image-brightness-characteristic-based pan/tilt/zoom (PTZ) video visibility detection method
CN102175613A (en) * 2011-01-26 2011-09-07 南京大学 Image-brightness-characteristic-based pan/tilt/zoom (PTZ) video visibility detection method
CN102222332A (en) * 2011-05-19 2011-10-19 长安大学 Geometric calibration method of camera under linear model
CN102622747B (en) * 2012-02-16 2013-10-16 北京航空航天大学 Camera parameter optimization method for vision measurement
CN102622747A (en) * 2012-02-16 2012-08-01 北京航空航天大学 Camera parameter optimization method for vision measurement
CN102800084B (en) * 2012-06-20 2014-10-29 东南大学 Method for measuring image principal point coordinates and distortion coefficient of linear target
CN102800084A (en) * 2012-06-20 2012-11-28 东南大学 Method for measuring image principal point coordinates and distortion coefficient of linear target
CN104182933A (en) * 2013-05-28 2014-12-03 东北大学 Wide-angle lens image distortion correcting method based on reverse division model
CN103729837A (en) * 2013-06-25 2014-04-16 长沙理工大学 Rapid calibration method of single road condition video camera
CN103747207A (en) * 2013-12-11 2014-04-23 深圳先进技术研究院 Positioning and tracking method based on video monitor network
CN103630496A (en) * 2013-12-12 2014-03-12 南京大学 Traffic video visibility detecting method based on road surface brightness and least square approach
CN103985250A (en) * 2014-04-04 2014-08-13 浙江工业大学 Light-weight holographic road traffic state visual inspection device
CN103985250B (en) * 2014-04-04 2016-05-18 浙江工业大学 The holographic road traffic state vision inspection apparatus of lightweight
CN108090933A (en) * 2016-11-22 2018-05-29 腾讯科技(深圳)有限公司 Two dimensional surface scaling method and device
CN107505344A (en) * 2017-07-25 2017-12-22 中国海洋石油总公司 The lithologic interpretation method of " least square product " method of utilization
CN107481291B (en) * 2017-08-16 2020-04-03 长安大学 Traffic monitoring model calibration method and system based on physical coordinates of marked dotted lines
CN107481291A (en) * 2017-08-16 2017-12-15 长安大学 Traffic monitoring model calibration method and system based on mark dotted line physical coordinates
CN110349219A (en) * 2018-04-04 2019-10-18 杭州海康威视数字技术股份有限公司 A kind of Camera extrinsic scaling method and device
CN112106110A (en) * 2018-04-27 2020-12-18 上海趋视信息科技有限公司 System and method for calibrating camera
US11468598B2 (en) 2018-04-27 2022-10-11 Shanghai Truthvision Information Technology Co., Ltd. System and method for camera calibration
CN109345595B (en) * 2018-09-14 2022-02-11 北京航空航天大学 Stereoscopic vision sensor calibration method based on spherical lens
CN109345595A (en) * 2018-09-14 2019-02-15 北京航空航天大学 A kind of stereo visual sensor calibration method based on ball lens
CN109448376A (en) * 2018-11-27 2019-03-08 科大国创软件股份有限公司 A kind of road condition analyzing method and system based on video
CN111508027A (en) * 2019-01-31 2020-08-07 杭州海康威视数字技术股份有限公司 Method and device for calibrating external parameters of camera
CN111508027B (en) * 2019-01-31 2023-10-20 杭州海康威视数字技术股份有限公司 Method and device for calibrating external parameters of camera
CN111462249A (en) * 2020-04-02 2020-07-28 北京迈格威科技有限公司 Calibration data acquisition method, calibration method and device for traffic camera
CN111524182A (en) * 2020-04-29 2020-08-11 杭州电子科技大学 Mathematical modeling method based on visual information analysis
CN111524182B (en) * 2020-04-29 2023-11-10 杭州电子科技大学 Mathematical modeling method based on visual information analysis
CN112562330A (en) * 2020-11-27 2021-03-26 深圳市综合交通运行指挥中心 Method and device for evaluating road operation index, electronic equipment and storage medium
CN112798811A (en) * 2020-12-30 2021-05-14 杭州海康威视数字技术股份有限公司 Speed measurement method, device and equipment
CN112798811B (en) * 2020-12-30 2023-07-28 杭州海康威视数字技术股份有限公司 Speed measurement method, device and equipment
CN112950726A (en) * 2021-03-25 2021-06-11 深圳市商汤科技有限公司 Camera orientation calibration method and related product
CN113223276A (en) * 2021-03-25 2021-08-06 桂林电子科技大学 Pedestrian hurdling behavior alarm method and device based on video identification
CN113223096A (en) * 2021-06-09 2021-08-06 司法鉴定科学研究院 Rapid investigation method and system for slight traffic accident based on scene image
CN113223096B (en) * 2021-06-09 2022-08-30 司法鉴定科学研究院 Rapid investigation method and system for slight traffic accident based on scene image
CN113380035A (en) * 2021-06-16 2021-09-10 山东省交通规划设计院集团有限公司 Road intersection traffic volume analysis method and system

Similar Documents

Publication Publication Date Title
CN101118648A (en) Road conditions video camera marking method under traffic monitoring surroundings
CN110660023B (en) Video stitching method based on image semantic segmentation
CN102867414B (en) Vehicle queue length measurement method based on PTZ (Pan/Tilt/Zoom) camera fast calibration
Goldbeck et al. Lane detection and tracking by video sensors
CN107229908B (en) A kind of method for detecting lane lines
CN101894366B (en) Method and device for acquiring calibration parameters and video monitoring system
CN110889829A (en) Monocular distance measurement method based on fisheye lens
CN108648241A (en) A kind of Pan/Tilt/Zoom camera field calibration and fixed-focus method
Gerke Using horizontal and vertical building structure to constrain indirect sensor orientation
CN113223075A (en) Ship height measuring system and method based on binocular camera
CN112017238A (en) Method and device for determining spatial position information of linear object
CN113221883B (en) Unmanned aerial vehicle flight navigation route real-time correction method
CN116883610A (en) Digital twin intersection construction method and system based on vehicle identification and track mapping
CN115166722B (en) Non-blind-area single-rod multi-sensor detection device for road side unit and control method
CN108362205A (en) Space ranging method based on fringe projection
Li et al. Panoramic image mosaic technology based on sift algorithm in power monitoring
CN110969576B (en) Highway pavement image splicing method based on roadside PTZ camera
CN103260008A (en) Projection converting method from image position to actual position
KR102065337B1 (en) Apparatus and method for measuring movement information of an object using a cross-ratio
Laureshyn et al. Automated video analysis as a tool for analysing road user behaviour
CN116973857A (en) Radar and vision detection online joint calibration method
CN116912517A (en) Method and device for detecting camera view field boundary
CN115719442A (en) Intersection target fusion method and system based on homography transformation matrix
Wang et al. Vehicle Micro-Trajectory Automatic Acquisition Method Based on Multi-Sensor Fusion
CN115249345A (en) Traffic jam detection method based on oblique photography three-dimensional live-action map

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Open date: 20080206