
CN112365527A - Method and system for tracking vehicles across mirrors in park - Google Patents


Info

Publication number
CN112365527A
CN112365527A (application CN202011104876.2A)
Authority
CN
China
Prior art keywords
vehicle
target tracking
target
tracking vehicle
coordinate point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011104876.2A
Other languages
Chinese (zh)
Inventor
兰雨晴
周建飞
余丹
王丹星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongbiao Huian Information Technology Co Ltd
Original Assignee
Zhongbiao Huian Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongbiao Huian Information Technology Co Ltd filed Critical Zhongbiao Huian Information Technology Co Ltd
Priority to CN202011104876.2A priority Critical patent/CN112365527A/en
Publication of CN112365527A publication Critical patent/CN112365527A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/292 - Multi-camera tracking
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30232 - Surveillance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30241 - Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides a method and a system for tracking vehicles across mirrors (i.e., across cameras) in a park. The method comprises the following steps: step 1: detecting the surveillance video of the park with the current camera, and extracting target vehicle characteristic information of the target tracking vehicle from the surveillance video; step 2: performing single-lens tracking of the target tracking vehicle with the current camera until the target tracking vehicle disappears from the lens of the current camera; step 3: continuing single-lens tracking of the target tracking vehicle when it appears in a new camera, and cycling in this manner until the target tracking vehicle leaves the park. Because the current camera tracks the target vehicle until it disappears from that camera's lens, and a new camera then continues the single-lens tracking until the vehicle leaves the park, tracking of vehicles throughout the park is realized through the systematic cooperation of multiple cameras.

Description

Method and system for tracking vehicles across mirrors in park
Technical Field
The invention relates to the technical field of vehicle tracking, in particular to a method and a system for tracking vehicles across mirrors in a park.
Background
Current methods for tracking vehicles across mirrors in a park mainly rely on front-end equipment transmitting identified, structured vehicle data to a central server. The disadvantage of this approach is that each camera operates independently, so the cameras cannot work together systematically.
Disclosure of Invention
The invention provides a method and a system for tracking vehicles across mirrors in a park, which solve the problem that the cameras cannot work together systematically.
The invention provides a method for tracking vehicles across mirrors in a park, which comprises the following steps:
step 1: detecting a surveillance video of a park by using a current camera, and extracting target vehicle characteristic information of a target tracking vehicle from the surveillance video;
step 2: carrying out single-lens tracking on the target tracking vehicle by using the current camera until the target tracking vehicle disappears from the lens of the current camera;
step 3: continuing to perform single-lens tracking of the target tracking vehicle when it appears in a new camera, and cycling in this manner until the target tracking vehicle leaves the park.
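The three steps above amount to a camera-handoff loop, sketched below in Python. This is a minimal illustration and not part of the patent: the feature fields, the per-camera feeds, and the exact-match rule are all assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VehicleFeatures:
    """Target vehicle characteristic information (step 1). The field
    names are illustrative; the patent lists plate, brand, color,
    vehicle type, and vehicle speed as candidates."""
    plate: str
    color: str

def track_across_cameras(camera_feeds, target):
    """Sketch of steps 1-3. Each entry of `camera_feeds` is a pair
    (camera_id, vehicles_seen) standing in for one camera's surveillance
    video. A camera whose feed contains a vehicle matching the target's
    features performs single-lens tracking (not modelled here); the loop
    then hands off to the next camera until the vehicle leaves the park."""
    handoff_order = []
    for camera_id, vehicles_seen in camera_feeds:
        if any(v == target for v in vehicles_seen):  # step S31 comparison
            handoff_order.append(camera_id)          # this camera tracks next
    return handoff_order
```

The returned list is the ordered sequence of cameras that performed single-lens tracking, which is also the raw material for the historical track of step 4.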
Further, the method performs the following steps:
step 4: generating a historical track of the target tracking vehicle according to the surveillance videos shot by the plurality of cameras that performed single-lens tracking of the target tracking vehicle before it left the park.
Further, the step 4: generating a historical track of the target tracking vehicle according to monitoring videos shot by a plurality of cameras for performing single-mirror tracking on the target tracking vehicle before the target tracking vehicle leaves the park, and executing the following steps:
step A1: obtaining the instantaneous speed of the target tracking vehicle according to the position coordinate point of the target tracking vehicle in each frame of image shot by the current camera by using a formula (1);
Vi = √((Xi - Xi-1)² + (Yi - Yi-1)²) / t (1)
wherein Vi represents the instantaneous speed of the target tracking vehicle in the i-th frame image; (Xi, Yi) represents the position coordinate point of the target tracking vehicle in the i-th frame image; (Xi-1, Yi-1) represents the position coordinate point of the target tracking vehicle in the (i-1)-th frame image; and t represents the time taken by the current camera to shoot one frame of image;
step A2: obtaining, by using a formula (2), the inclination angle of the connecting line along which the target tracking vehicle moves from the position coordinate point of the (i-1)-th frame image to the position coordinate point of the i-th frame image;
θi = arctan((Yi - Yi-1) / (Xi - Xi-1)) (2)
wherein θi represents the inclination angle of the connecting line along which the target tracking vehicle moves from the position coordinate point of the (i-1)-th frame image to the position coordinate point of the i-th frame image; θi ≥ 0 indicates a counter-clockwise rotation by the angle θi from the positive half-axis of the X axis, and θi < 0 indicates a clockwise rotation by the angle θi from the positive half-axis of the X axis;
step A3: obtaining, by using a formula (3), the connecting-line equation of the movement of the target tracking vehicle from the position coordinate point of the (i-1)-th frame image to the position coordinate point of the i-th frame image, according to the inclination angle obtained in step A2 and the position coordinate point of the target tracking vehicle in the i-th frame image;
f(x) = tanθi · (x - Xi) - Yi (3)
wherein f(x) represents the ordinate value of the connecting-line equation of the movement of the target tracking vehicle from the position coordinate point of the (i-1)-th frame image to the position coordinate point of the i-th frame image, and x represents the abscissa argument of that connecting-line equation;
step A4: connecting the position coordinate point of the (i-1)-th frame image with the position coordinate point of the i-th frame image according to the formula (3), and marking the instantaneous speed Vi at the position coordinate point of the i-th frame image;
step A5: repeating steps A1-A4 for each frame of image shot by all the cameras, thereby completing the drawing of the historical track of the target tracking vehicle.
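Steps A1-A5 can be reproduced directly in code. The sketch below assumes, consistently with the symbol definitions, that formula (1) is the Euclidean distance between consecutive position coordinate points divided by the frame time t, and that formula (2) is the arctangent of the slope between them; it annotates each trajectory segment with the instantaneous speed and inclination angle of its endpoint.

```python
import math

def instantaneous_speed(prev_pt, curr_pt, t):
    """Formula (1): Vi = sqrt((Xi - Xi-1)^2 + (Yi - Yi-1)^2) / t,
    with t the time the camera takes to shoot one frame."""
    return math.hypot(curr_pt[0] - prev_pt[0], curr_pt[1] - prev_pt[1]) / t

def line_inclination(prev_pt, curr_pt):
    """Formula (2): inclination angle of the connecting line between
    consecutive position coordinate points. atan2 reproduces the sign
    convention (positive = counter-clockwise from the positive X axis)."""
    return math.atan2(curr_pt[1] - prev_pt[1], curr_pt[0] - prev_pt[0])

def trajectory_segments(points, t):
    """Steps A1-A5 over one camera's frames: each consecutive pair of
    position coordinate points yields a segment annotated with the
    instantaneous speed and inclination angle at its endpoint."""
    return [
        (curr, instantaneous_speed(prev, curr, t), line_inclination(prev, curr))
        for prev, curr in zip(points, points[1:])
    ]
```

Running this per camera and concatenating the segments in time order gives the drawn historical track with its per-frame speed annotations.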
Further, the step 1: detecting a surveillance video of a park by using a current camera, extracting target vehicle characteristic information of a target tracking vehicle from the surveillance video, and executing the following steps:
step S11: detecting a target tracking vehicle in a monitoring video shot by one of the plurality of cameras in the park, and taking the camera which detects the target tracking vehicle as a current camera;
step S12: extracting target tracking vehicle information for the detected target tracking vehicle.
Further, the target vehicle characteristic information includes: at least one of a vehicle license plate, brand, color, vehicle type, vehicle speed.
Further, in the step S11, the target tracking vehicle is detected by a Faster R-CNN, YOLO or SSD deep learning algorithm.
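The scan in step S11 might look like the following sketch. Here `detect` is only a stand-in for whichever Faster R-CNN / YOLO / SSD model is used, and the frame representation is purely illustrative; neither is specified by the patent.

```python
def find_current_camera(camera_frames, detect):
    """Step S11 sketch: scan one frame from each park camera with a
    vehicle detector and take the first camera that sees the target
    vehicle as the current camera. `detect` is any callable returning
    extracted feature information (step S12) or None."""
    for camera_id, frame in camera_frames:
        features = detect(frame)  # step S12: extract target vehicle features
        if features is not None:
            return camera_id, features
    return None  # target not yet visible on any camera
```

In a deployment the callable would wrap a trained detector; substituting a stub makes the control flow testable in isolation.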
Further, the step 3: when the target tracking vehicle appears in a new camera, continuing to perform single-mirror tracking on the target tracking vehicle, and executing the following steps:
step S31: when a vehicle reappears in a new camera, comparing the vehicle characteristic information in the current picture shot by the new camera with the target vehicle characteristic information to determine whether the vehicle in the new camera is matched with the target tracking vehicle;
step S32: if so, continuing to implement single-mirror tracking on the target tracking vehicle by using the new camera;
step S33: and if not, repeatedly executing the step S31 and the step S32 until a vehicle matched with the target tracking vehicle appears in the new camera, and continuously performing single-mirror tracking on the target tracking vehicle by using the new camera.
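The comparison of step S31 is not specified beyond matching characteristic information. One hedged reading, requiring an assumed minimum number of agreeing attributes (the threshold is the author's assumption, not the patent's), is:

```python
def feature_match(candidate, target, required=2):
    """Step S31 comparison sketch: compare the vehicle feature
    information seen by the new camera (plate, brand, color, type,
    speed) against the target vehicle feature information. A vehicle
    matches when at least `required` of the target's known attributes
    agree; the matching rule itself is illustrative."""
    agreeing = sum(1 for key, value in target.items()
                   if candidate.get(key) == value)
    return agreeing >= required
```

Steps S32 and S33 then reduce to calling this predicate for each vehicle appearing in each new camera until it returns true.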
The method for tracking vehicles across mirrors in a park provided by the embodiment of the invention has the following beneficial effects: the current camera performs single-lens tracking of the target tracking vehicle until the vehicle disappears from the current camera's lens, and a new camera then continues the single-lens tracking until the target tracking vehicle leaves the park, so that tracking of vehicles in the park is realized through systematic cooperation among the cameras.
The invention also provides a mirror-crossing tracking system for vehicles in a park, which comprises:
the target tracking vehicle detection module is used for detecting a monitoring video of a park by using a current camera and extracting target vehicle characteristic information of a target tracking vehicle from the monitoring video;
the first single-lens tracking module is used for carrying out single-lens tracking on the target tracking vehicle by using the current camera until the target tracking vehicle disappears from the lens of the current camera;
and the circulating single-mirror tracking module is used for continuously carrying out single-mirror tracking on the target tracking vehicle when the target tracking vehicle appears in the new camera, and circulating in the mode until the target tracking vehicle leaves the park.
Further, the on-campus vehicle mirror-crossing tracking system further comprises: and the historical track generating module is used for generating the historical track of the target tracking vehicle according to the monitoring videos shot by the plurality of cameras for performing single-mirror tracking on the target tracking vehicle before the target tracking vehicle leaves the park.
Further, the target-tracking vehicle detection module includes:
the target tracking vehicle detection unit is used for detecting a target tracking vehicle in a monitoring video shot by one of the plurality of cameras in the park and taking the camera which detects the target tracking vehicle as a current camera;
a vehicle information extraction unit that extracts target-tracking vehicle information for the detected target-tracking vehicle.
The system for tracking vehicles across mirrors in a park provided by the embodiment of the invention has the following beneficial effects: the first single-lens tracking module performs single-lens tracking of the target tracking vehicle with the current camera until the vehicle disappears from the current camera's lens, and the circulating single-lens tracking module then continues the single-lens tracking with each new camera until the target tracking vehicle leaves the park, so that tracking of vehicles in the park is realized through systematic cooperation among the cameras.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic flow chart illustrating a method for tracking vehicles across mirrors in a campus according to an embodiment of the present invention;
figure 2 is a block diagram of a vehicle cross-mirror tracking system on a campus in accordance with an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
The embodiment of the invention provides a method for tracking vehicles across mirrors in a park, which comprises the following steps of:
step 1: detecting a surveillance video of a park by using a current camera, and extracting target vehicle characteristic information of a target tracking vehicle from the surveillance video;
step 2: carrying out single-lens tracking on the target tracking vehicle by using the current camera until the target tracking vehicle disappears from the lens of the current camera;
step 3: continuing to perform single-lens tracking of the target tracking vehicle when it appears in a new camera, and cycling in this manner until the target tracking vehicle leaves the park.
The working principle of the technical scheme is as follows: the method adopts a plurality of cameras that cooperate systematically to track the target tracking vehicle throughout the park. Specifically, the current camera first detects the surveillance video of the park, and the target vehicle characteristic information of the target tracking vehicle is extracted from the surveillance video; the current camera then performs single-lens tracking of the target tracking vehicle until it disappears from the current camera's lens; finally, when the target tracking vehicle appears in a new camera, the single-lens tracking continues with that camera, cycling in this manner until the target tracking vehicle leaves the park.
Single-lens tracking means that only the current camera films and tracks the target tracking vehicle, until the vehicle disappears from that camera's lens. The park may be a logistics park, a campus, an office area, a residential community, etc.
The beneficial effects of the above technical scheme are: the current camera performs single-lens tracking of the target tracking vehicle until the vehicle disappears from the current camera's lens, and a new camera then continues the single-lens tracking until the target tracking vehicle leaves the park, so that tracking of vehicles in the park is realized through systematic cooperation among the cameras.
In one embodiment, the method further performs the steps of:
step 4: generating a historical track of the target tracking vehicle according to the surveillance videos shot by the plurality of cameras that performed single-lens tracking of the target tracking vehicle before it left the park.
The working principle of the technical scheme is as follows: and generating a historical track of the target tracking vehicle according to the monitoring videos shot by the plurality of cameras for performing single-mirror tracking on the target tracking vehicle.
The beneficial effects of the above technical scheme are: and generating a historical track of the target tracking vehicle, so that the target tracking vehicle can be conveniently tracked in the whole process.
In one embodiment, the step 1: detecting a surveillance video of a park by using a current camera, extracting target vehicle characteristic information of a target tracking vehicle from the surveillance video, and executing the following steps:
step S11: detecting a target tracking vehicle in a monitoring video shot by one of the plurality of cameras in the park, and taking the camera which detects the target tracking vehicle as a current camera;
step S12: extracting target tracking vehicle information for the detected target tracking vehicle.
The working principle of the technical scheme is as follows: the target vehicle characteristic information includes: at least one of a vehicle license plate, brand, color, vehicle type, vehicle speed.
In the step S11, the target tracking vehicle is detected through a Faster R-CNN, YOLO or SSD deep learning algorithm.
The beneficial effects of the above technical scheme are: specific steps are provided for detecting surveillance videos of a campus with a current camera and extracting target vehicle characteristic information of a target-tracked vehicle from the surveillance videos.
In one embodiment, the step 3: when the target tracking vehicle appears in a new camera, continuing to perform single-mirror tracking on the target tracking vehicle, and executing the following steps:
step S31: when a vehicle reappears in a new camera, comparing the vehicle characteristic information in the current picture shot by the new camera with the target vehicle characteristic information to determine whether the vehicle in the new camera is matched with the target tracking vehicle;
step S32: if so, continuing to implement single-mirror tracking on the target tracking vehicle by using the new camera;
step S33: and if not, repeatedly executing the step S31 and the step S32 until a vehicle matched with the target tracking vehicle appears in the new camera, and continuously performing single-mirror tracking on the target tracking vehicle by using the new camera.
The working principle of the technical scheme is as follows: when a vehicle reappears in the new camera, the vehicle characteristic information in the picture currently shot by the new camera is compared with the previously detected target vehicle characteristic information to determine whether that vehicle matches the target tracking vehicle. If it matches, the vehicle appearing in the new camera is considered to be the target tracking vehicle, and the new camera continues the single-lens tracking. If it does not match, the vehicle is not the target tracking vehicle; the other cameras are then searched and each candidate vehicle is checked against the target, until a vehicle matching the target tracking vehicle appears in a new camera and that camera continues the single-lens tracking.
The beneficial effects of the above technical scheme are: specific steps are provided for continuing to perform single mirror tracking of the target-tracking vehicle when the target-tracking vehicle is present in the new camera.
In one embodiment, the step 4: generating a historical track of the target tracking vehicle according to monitoring videos shot by a plurality of cameras for performing single-mirror tracking on the target tracking vehicle before the target tracking vehicle leaves the park, and executing the following steps:
step A1: obtaining the instantaneous speed of the target tracking vehicle according to the position coordinate point of the target tracking vehicle in each frame of image shot by the current camera by using a formula (1);
Vi = √((Xi - Xi-1)² + (Yi - Yi-1)²) / t (1)
wherein Vi represents the instantaneous speed of the target tracking vehicle in the i-th frame image; (Xi, Yi) represents the position coordinate point of the target tracking vehicle in the i-th frame image; (Xi-1, Yi-1) represents the position coordinate point of the target tracking vehicle in the (i-1)-th frame image; and t represents the time taken by the current camera to shoot one frame of image;
step A2: obtaining, by using a formula (2), the inclination angle of the connecting line along which the target tracking vehicle moves from the position coordinate point of the (i-1)-th frame image to the position coordinate point of the i-th frame image;
θi = arctan((Yi - Yi-1) / (Xi - Xi-1)) (2)
wherein θi represents the inclination angle of the connecting line along which the target tracking vehicle moves from the position coordinate point of the (i-1)-th frame image to the position coordinate point of the i-th frame image; θi ≥ 0 indicates a counter-clockwise rotation by the angle θi from the positive half-axis of the X axis, and θi < 0 indicates a clockwise rotation by the angle θi from the positive half-axis of the X axis;
step A3: obtaining, by using a formula (3), the connecting-line equation of the movement of the target tracking vehicle from the position coordinate point of the (i-1)-th frame image to the position coordinate point of the i-th frame image, according to the inclination angle obtained in step A2 and the position coordinate point of the target tracking vehicle in the i-th frame image;
f(x) = tanθi · (x - Xi) - Yi (3)
wherein f(x) represents the ordinate value of the connecting-line equation of the movement of the target tracking vehicle from the position coordinate point of the (i-1)-th frame image to the position coordinate point of the i-th frame image, and x represents the abscissa argument of that connecting-line equation;
step A4: connecting the position coordinate point of the (i-1)-th frame image with the position coordinate point of the i-th frame image according to the formula (3), and marking the instantaneous speed Vi at the position coordinate point of the i-th frame image;
step A5: repeating steps A1-A4 for each frame of image shot by all the cameras, thereby completing the drawing of the historical track of the target tracking vehicle.
The beneficial effects of the above technical scheme are: the instantaneous speed of the target tracking vehicle is obtained with formula (1) in step A1, giving the speed at each frame of the historical track; the inclination angle of the connecting line from the (i-1)-th frame position coordinate point to the i-th frame position coordinate point is obtained with formula (2) in step A2, from which the straight-line equation for drawing that connecting line follows; the connecting-line equation itself is obtained with formula (3) in step A3, and the historical track is drawn from this equation together with the instantaneous speeds. The drawn track is therefore more detailed and concrete, and the reflected historical track is more accurate and true to reality.
As shown in fig. 2, an embodiment of the present invention provides a vehicle cross-mirror tracking system in a campus, including:
the target tracking vehicle detection module 201 is used for detecting a surveillance video of a park by using a current camera and extracting target vehicle characteristic information of a target tracking vehicle from the surveillance video;
the first single-lens tracking module 202 is configured to perform single-lens tracking on the target tracking vehicle by using a current camera until the target tracking vehicle disappears from a lens of the current camera;
a cycle single mirror tracking module 203 for continuing single mirror tracking of the target tracking vehicle as it appears in a new camera and cycling in this manner until the target tracking vehicle leaves the campus.
The working principle of the technical scheme is as follows: the method adopts a plurality of cameras to systematically cooperate to realize the tracking of the target tracking vehicle in the whole park, and specifically, a target tracking vehicle detection module 201 detects a surveillance video of the park by using the current camera and extracts the target vehicle characteristic information of the target tracking vehicle from the surveillance video; the first single-lens tracking module 202 performs single-lens tracking on the target tracking vehicle by using the current camera until the target tracking vehicle disappears from the lens of the current camera; the loop single mirror tracking module 203 continues to perform single mirror tracking on the target tracking vehicle as it appears in the new camera and loops in this manner until the target tracking vehicle leaves the campus.
The beneficial effects of the above technical scheme are: the first single-lens tracking module performs single-lens tracking of the target tracking vehicle with the current camera until the vehicle disappears from the current camera's lens, and the circulating single-lens tracking module then continues the single-lens tracking with each new camera until the target tracking vehicle leaves the park, so that tracking of vehicles in the park is realized through systematic cooperation among the cameras.
In one embodiment, the on-campus vehicle cross-mirror tracking system further comprises: a historical track generating module 204, configured to generate a historical track of the target-tracking vehicle according to a surveillance video captured by a plurality of cameras that perform single-mirror tracking on the target-tracking vehicle before the target-tracking vehicle leaves the campus.
The working principle of the technical scheme is as follows: the historical track generation module 204 generates a historical track of the target tracking vehicle according to the surveillance videos shot by the multiple cameras for performing single-mirror tracking on the target tracking vehicle.
The beneficial effects of the above technical scheme are: by means of the historical track generation module, the historical track of the target tracking vehicle can be generated, and the target tracking vehicle can be conveniently tracked in the whole process.
In one embodiment, the object tracking vehicle detection module 201 includes:
the target tracking vehicle detection unit is used for detecting a target tracking vehicle in a monitoring video shot by one of the plurality of cameras in the park and taking the camera which detects the target tracking vehicle as a current camera;
a vehicle information extraction unit that extracts target-tracking vehicle information for the detected target-tracking vehicle.
The working principle of the technical scheme is as follows: the target vehicle characteristic information includes: at least one of a vehicle license plate, brand, color, vehicle type, vehicle speed.
The target tracking vehicle detection unit detects the target tracking vehicle through a Faster R-CNN, YOLO or SSD deep learning algorithm.
The beneficial effects of the above technical scheme are: by means of the target tracking vehicle detection unit and the vehicle information extraction unit, the target tracking vehicle in the monitoring video of the park can be detected by the current camera and the characteristic information of the target vehicle can be extracted.
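The extracted target vehicle characteristic information can be carried in a small record holding the fields listed above (license plate, brand, color, vehicle type, speed). This is a minimal sketch; the class and field names are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VehicleFeatures:
    """Characteristic information extracted for a detected vehicle.

    Every field is optional: the scheme only requires at least one of
    them to be available for later cross-camera matching.
    """
    plate: Optional[str] = None         # vehicle license plate
    brand: Optional[str] = None
    color: Optional[str] = None
    vehicle_type: Optional[str] = None
    speed: Optional[float] = None       # instantaneous speed, if estimated

# Record produced by the vehicle information extraction unit (hypothetical values)
target = VehicleFeatures(plate="ABC123", color="red", vehicle_type="sedan")
```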
In one embodiment, the cyclic single mirror tracking module 203 comprises:
the vehicle characteristic information comparison unit is used for comparing the vehicle characteristic information in the current picture shot by the new camera with the target vehicle characteristic information when the vehicle reappears in the new camera so as to determine whether the vehicle in the new camera is matched with the target tracking vehicle;
the new camera single-lens tracking unit is used for continuously carrying out single-lens tracking on the target tracking vehicle by utilizing the new camera under the condition that the vehicle in the new camera is matched with the target tracking vehicle;
and the matching repeated execution unit is used for, in the case that the vehicle in the new camera does not match the target tracking vehicle, repeatedly invoking the vehicle characteristic information comparison unit and the new camera single-lens tracking unit until a vehicle matching the target tracking vehicle is found in a new camera, whereupon single-lens tracking of the target tracking vehicle continues with that camera.
The working principle of the technical scheme is as follows: when a vehicle reappears in the new camera, the vehicle characteristic information comparison unit compares the vehicle characteristic information in the current picture shot by the new camera with the previously detected target vehicle characteristic information to determine whether the vehicle appearing in the new camera matches the target tracking vehicle; if it matches, the vehicle appearing in the new camera is taken to be the target tracking vehicle, and the new camera single-lens tracking unit continues single-lens tracking of the target tracking vehicle with the new camera; if it does not match, the vehicle appearing in the new camera is not the target tracking vehicle, and the matching repeated execution unit continues to search other cameras and judge whether each newly appearing vehicle matches the target tracking vehicle, until a vehicle matching the target tracking vehicle appears in a new camera and single-lens tracking of the target tracking vehicle continues with that camera.
The beneficial effects of the above technical scheme are: by means of the vehicle characteristic information comparison unit, the new camera single-mirror tracking unit and the matching repeated execution unit, single-mirror tracking can be continuously carried out on the target tracking vehicle when the target tracking vehicle appears in the new camera.
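The comparison performed by the vehicle characteristic information comparison unit can be sketched as a simple field-wise match. Treating a known license plate as decisive and requiring a `min_common` number of agreeing fields otherwise are assumptions made for illustration, not part of the disclosure.

```python
def matches_target(candidate: dict, target: dict, min_common: int = 2) -> bool:
    """Decide whether a vehicle seen in a new camera matches the target.

    candidate, target: dicts with keys like 'plate', 'brand', 'color',
    'vehicle_type'; missing or None fields are ignored.
    A license plate known on both sides is decisive; otherwise at least
    `min_common` other fields must agree and none may disagree.
    """
    if candidate.get("plate") and target.get("plate"):
        return candidate["plate"] == target["plate"]
    common = 0
    for key in ("brand", "color", "vehicle_type"):
        a, b = candidate.get(key), target.get(key)
        if a is not None and b is not None:
            if a != b:
                return False   # any known field that disagrees rules it out
            common += 1
    return common >= min_common
```

In this sketch, an uncertain match (too few common fields) is rejected, which corresponds to the matching repeated execution unit moving on to the next candidate vehicle.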
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for transmirror tracking of vehicles on a campus, the method comprising the steps of:
step 1: detecting a surveillance video of a park by using a current camera, and extracting target vehicle characteristic information of a target tracking vehicle from the surveillance video;
step 2: carrying out single-lens tracking on the target tracking vehicle by using the current camera until the target tracking vehicle disappears from the lens of the current camera;
and step 3: continuing to perform single mirror tracking of the target tracking vehicle as it appears in a new camera and cycling in this manner until the target tracking vehicle leaves the campus.
2. The method of claim 1, wherein the method further performs the steps of:
and 4, step 4: generating a historical track of the target tracking vehicle according to the monitoring videos shot by the plurality of cameras for performing single-mirror tracking on the target tracking vehicle before the target tracking vehicle leaves the park.
3. The method of claim 2, wherein step 4 (generating a historical track of the target tracking vehicle according to the surveillance videos shot by the plurality of cameras that perform single-mirror tracking on the target tracking vehicle before it leaves the park) comprises the following steps:
step A1: obtaining the instantaneous speed of the target tracking vehicle from its position coordinate point in each frame of image shot by the current camera, using formula (1):

V_i = √((X_i − X_{i−1})² + (Y_i − Y_{i−1})²) / T    (1)

wherein V_i represents the instantaneous speed of the target tracking vehicle at the i-th frame image; (X_i, Y_i) represents the position coordinate point of the target tracking vehicle in the i-th frame image; (X_{i−1}, Y_{i−1}) represents the position coordinate point of the target tracking vehicle in the (i−1)-th frame image; and T represents the time for the current camera to shoot one frame of image;
step A2: obtaining, from the position coordinate points, the inclination angle of the connecting line along which the target tracking vehicle moves from the position coordinate point of the (i−1)-th frame image to that of the i-th frame image, using formula (2):

θ_i = arctan((Y_i − Y_{i−1}) / (X_i − X_{i−1}))    (2)

wherein θ_i represents the inclination angle of the connecting line along which the target tracking vehicle moves from the position coordinate point of the (i−1)-th frame image to that of the i-th frame image; θ_i ≥ 0 denotes a counterclockwise rotation of |θ_i| from the positive half axis of the X-axis, and θ_i < 0 denotes a clockwise rotation of |θ_i| from the positive half axis of the X-axis;
step A3: obtaining the connecting-line equation of the position coordinate points, from the inclination angle obtained in step A2 and the position coordinate point of the target tracking vehicle in the i-th frame image, using formula (3):

f(x) = tan θ_i · (x − X_i) + Y_i    (3)

wherein f(x) represents the ordinate of the connecting line from the position coordinate point of the (i−1)-th frame image to that of the i-th frame image, and x represents the abscissa argument of that connecting line;
step A4: connecting the position coordinate point of the (i−1)-th frame image of the target tracking vehicle with that of the i-th frame image according to formula (3), and marking the instantaneous speed V_i at the position coordinate point of the i-th frame image;
step A5: repeating steps A1 to A4 for each frame of image shot by all the cameras, thereby completing the drawing of the historical track of the target tracking vehicle.
4. The method of claim 1, wherein step 1 (detecting a surveillance video of a park with the current camera and extracting target vehicle characteristic information of a target tracking vehicle from the surveillance video) comprises the following steps:
step S11: detecting a target tracking vehicle in a monitoring video shot by one of the plurality of cameras in the park, and taking the camera which detects the target tracking vehicle as a current camera;
step S12: extracting target tracking vehicle information for the detected target tracking vehicle.
5. The method of claim 4, wherein the target vehicle characteristic information comprises at least one of a vehicle license plate, brand, color, vehicle type, and vehicle speed.
6. The method of claim 4, wherein in step S11 the target tracking vehicle is detected with a Faster R-CNN, YOLO, or SSD deep learning algorithm.
7. The method of claim 1, wherein step 3 (continuing to perform single-mirror tracking of the target tracking vehicle when it appears in a new camera) comprises the following steps:
step S31: when a vehicle reappears in a new camera, comparing the vehicle characteristic information in the current picture shot by the new camera with the target vehicle characteristic information to determine whether the vehicle in the new camera is matched with the target tracking vehicle;
step S32: if so, continuing to implement single-mirror tracking on the target tracking vehicle by using the new camera;
step S33: and if not, repeatedly executing the step S31 and the step S32 until a vehicle matched with the target tracking vehicle appears in the new camera, and continuously performing single-mirror tracking on the target tracking vehicle by using the new camera.
8. A vehicle on-campus mirror tracking system, comprising:
the target tracking vehicle detection module is used for detecting a monitoring video of a park by using a current camera and extracting target vehicle characteristic information of a target tracking vehicle from the monitoring video;
the first single-lens tracking module is used for carrying out single-lens tracking on the target tracking vehicle by using the current camera until the target tracking vehicle disappears from the lens of the current camera;
and the circulating single-mirror tracking module is used for continuously carrying out single-mirror tracking on the target tracking vehicle when the target tracking vehicle appears in the new camera, and circulating in the mode until the target tracking vehicle leaves the park.
9. The system of claim 8, wherein the on-campus vehicle cross-mirror tracking system further comprises: and the historical track generating module is used for generating the historical track of the target tracking vehicle according to the monitoring videos shot by the plurality of cameras for performing single-mirror tracking on the target tracking vehicle before the target tracking vehicle leaves the park.
10. The system of claim 8, wherein the target-tracking vehicle detection module comprises:
the target tracking vehicle detection unit is used for detecting a target tracking vehicle in a monitoring video shot by one of the plurality of cameras in the park and taking the camera which detects the target tracking vehicle as a current camera;
a vehicle information extraction unit that extracts target-tracking vehicle information for the detected target-tracking vehicle.
CN202011104876.2A 2020-10-15 2020-10-15 Method and system for tracking vehicles across mirrors in park Pending CN112365527A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011104876.2A CN112365527A (en) 2020-10-15 2020-10-15 Method and system for tracking vehicles across mirrors in park


Publications (1)

Publication Number Publication Date
CN112365527A true CN112365527A (en) 2021-02-12

Family

ID=74507174

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011104876.2A Pending CN112365527A (en) 2020-10-15 2020-10-15 Method and system for tracking vehicles across mirrors in park

Country Status (1)

Country Link
CN (1) CN112365527A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017185503A1 (en) * 2016-04-29 2017-11-02 高鹏 Target tracking method and apparatus
CN107529665A (en) * 2017-07-06 2018-01-02 新华三技术有限公司 Car tracing method and device
CN108417047A (en) * 2018-05-10 2018-08-17 杭州盈蝶科技有限公司 A kind of vehicle location method for tracing and its system
CN108986158A (en) * 2018-08-16 2018-12-11 新智数字科技有限公司 A kind of across the scene method for tracing identified again based on target and device and Computer Vision Platform
CN109214315A (en) * 2018-08-21 2019-01-15 北京深瞐科技有限公司 Across the camera tracking method and device of people's vehicle
CN110348332A (en) * 2019-06-24 2019-10-18 长沙理工大学 The inhuman multiple target real-time track extracting method of machine under a kind of traffic video scene


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113870551A (en) * 2021-08-16 2021-12-31 清华大学 Roadside monitoring system capable of identifying dangerous and non-dangerous driving behaviors
CN116311107A (en) * 2023-05-25 2023-06-23 深圳市三物互联技术有限公司 Cross-camera tracking method and system based on reasoning optimization and neural network
CN116311107B (en) * 2023-05-25 2023-08-04 深圳市三物互联技术有限公司 Cross-camera tracking method and system based on reasoning optimization and neural network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination