CN110807804B - Method, apparatus, device and readable storage medium for target tracking
- Publication number: CN110807804B (application CN201911065424.5A)
- Authority: CN (China)
- Prior art keywords: target, image, imaging device, determining, line
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/70—Determining position or orientation of objects or cameras (under G06T7/00—Image analysis)
- G06T7/292—Multi-camera tracking (under G06T7/20—Analysis of motion)
- G06T2207/10004—Still image; Photographic image (under G06T2207/10—Image acquisition modality)
- G06T2207/20221—Image fusion; Image merging (under G06T2207/20212—Image combination)
- G06T2207/30244—Camera pose (under G06T2207/30—Subject of image; Context of image processing)
Abstract
The present disclosure provides a method, apparatus, device and readable storage medium for target tracking. The method for target tracking includes: obtaining a first image acquired by a first imaging device and a second image acquired by a second imaging device, and determining a coincidence region calibration line of the first image and the second image, wherein the coincidence region calibration line represents a boundary line of the coincidence region between the first image and the second image; determining a relative position, with respect to the coincidence region calibration line, of a target included in the first image and/or the second image; determining, based on the relative position, whether the target belongs to a coincident target included in both the first image and the second image; and, if the target belongs to a coincident target, determining the imaging device that tracks the coincident target based on the movement direction of the coincident target.
Description
Technical Field
The present disclosure relates to the field of vehicle-road coordination, and in particular, to a method, apparatus, device, and readable storage medium for target tracking.
Background
Artificial intelligence is the theory, method, technology and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain optimal results. In other words, artificial intelligence studies the design principles and implementation methods of various intelligent machines so that the machines have the functions of perception, reasoning and decision-making. With the research and progress of artificial intelligence technology, it has been developed, studied and applied in many fields; vehicle-road coordination is an important direction in the field of artificial intelligence and has broad application prospects.
In the field of machine vision-based vehicle-road coordination, it is necessary to continuously arrange imaging devices at certain intervals on both sides of a road for capturing motion information of objects such as vehicles, pedestrians, etc. on the road, to achieve tracking of the objects on the road, and to intelligently make driving decisions based on the captured information. For two adjacent imaging devices, there is an imaging overlapping area, and it is necessary to distinguish objects near the overlapping area to determine the imaging device for tracking the objects, so as to avoid resource waste and information interference caused by simultaneous tracking by the two imaging devices.
Disclosure of Invention
The present disclosure provides a method, apparatus, device and readable storage medium for object tracking for determining an imaging device that tracks a coincident object in a first image and a second image.
According to an aspect of the present disclosure, there is provided a method for target tracking, comprising: obtaining a first image acquired by a first imaging device and a second image acquired by a second imaging device, and determining a coincidence region calibration line of the first image and the second image, wherein the coincidence region calibration line represents a boundary line of the coincidence region between the first image and the second image; determining a relative position, with respect to the coincidence region calibration line, of a target included in the first image and/or the second image; determining, based on the relative position, whether the target belongs to a coincident target included in both the first image and the second image; and, if the target belongs to a coincident target, determining the imaging device that tracks the coincident target based on the movement direction of the coincident target.
According to some embodiments of the disclosure, the determining the coincidence region calibration line of the first image and the second image includes: determining positioning coordinates of a calibration dividing line based on positioning coordinates of a marker located between the first imaging device and the second imaging device, wherein the calibration dividing line serves as the coincidence region calibration line and represents an intermediate dividing line of the coincidence region.
According to some embodiments of the present disclosure, the determining a relative position of the target included in the first image and/or the second image with respect to the coincidence region calibration line includes: determining positioning coordinates of the target; and determining, based on the positioning coordinates of the calibration dividing line and the positioning coordinates of the target, the relative position of the target with respect to the calibration dividing line as the relative position of the target with respect to the coincidence region calibration line.
According to some embodiments of the disclosure, the determining the coincidence region calibration line of the first image and the second image includes: determining pixel coordinates of a calibration dividing line based on pixel coordinates, in the first image and the second image, of a marker located between the first imaging device and the second imaging device, wherein the calibration dividing line serves as the coincidence region calibration line and represents an intermediate dividing line of the coincidence region.
According to some embodiments of the present disclosure, the determining a relative position of the target included in the first image and/or the second image with respect to the coincidence region calibration line includes: determining a first pixel coordinate of the target in the first image and a second pixel coordinate of the target in the second image; and determining, based on the pixel coordinates of the calibration dividing line and the first and second pixel coordinates of the target, the relative position of the target with respect to the calibration dividing line as the relative position of the target with respect to the coincidence region calibration line.
According to some embodiments of the disclosure, the relative position includes: the target is on the side of the coincidence region calibration line close to the first imaging device; the target is on the side of the coincidence region calibration line close to the second imaging device; or the target is located on the coincidence region calibration line, wherein determining whether the target belongs to a coincident target included in the first image and the second image based on the relative position includes: determining that the target belongs to a coincident target in the case that the target is located on the coincidence region calibration line; and determining the imaging device that tracks the coincident target includes: determining the first imaging device as the imaging device that tracks the coincident target in the case that the movement direction of the coincident target is toward the first imaging device; and determining the second imaging device as the imaging device that tracks the coincident target in the case that the movement direction of the coincident target is toward the second imaging device.
According to some embodiments of the disclosure, the method further comprises: determining the first imaging device as the imaging device that tracks the target in the case that the target is located on the side close to the first imaging device; and determining the second imaging device as the imaging device that tracks the target in the case that the target is located on the side close to the second imaging device.
According to some embodiments of the disclosure, the determining the coincidence region calibration line of the first image and the second image includes: determining a first calibration line in the first image and determining a second calibration line in the second image, wherein the first calibration line and the second calibration line serve as the coincidence region calibration line, the first calibration line represents a boundary line of the coincidence region in the first image, and the second calibration line represents a boundary line of the coincidence region in the second image.
According to some embodiments of the present disclosure, the determining a relative position of the target included in the first image and/or the second image with respect to the coincidence region calibration line includes: determining a first pixel coordinate of the target in the first image and a second pixel coordinate of the target in the second image; and determining, based on the pixel coordinates of the first calibration line, the pixel coordinates of the second calibration line, and the first and second pixel coordinates of the target, the relative position of the target with respect to the first and second calibration lines as the relative position with respect to the coincidence region calibration line, wherein the relative position includes: the target is located outside the coincidence region and close to the first imaging device; the target is located outside the coincidence region and close to the second imaging device; or the target is located within the coincidence region.
According to some embodiments of the disclosure, the determining whether the object belongs to a coincident object included in the first image and the second image includes: determining that the object belongs to a coincident object under the condition that the object is positioned in the coincident region; determining an imaging device that tracks the coincident target comprises: determining the first imaging device as an imaging device for tracking the coincident target under the condition that the movement direction of the coincident target faces the first imaging device; and determining the second imaging device as an imaging device for tracking the coincident target in the case that the movement direction of the coincident target is toward the second imaging device.
According to some embodiments of the disclosure, the method further comprises: determining the first imaging device as an imaging device for tracking the target in the case that the target is located outside the overlapping area and is close to the first imaging device; in the case where the target is located outside the overlapping region and is close to the second imaging device, the second imaging device is determined as an imaging device that tracks the target.
According to some embodiments of the disclosure, the method further comprises: if the target belongs to a coincident target, performing image fusion on the coincident target based on the first image and the second image.
According to another aspect of the present disclosure, there is also provided an apparatus for target tracking, including: an acquisition unit configured to obtain a first image acquired by a first imaging device and a second image acquired by a second imaging device, and determine a coincidence region calibration line of the first image and the second image, wherein the coincidence region calibration line represents a boundary line of the coincidence region between the first image and the second image; a position unit configured to determine a relative position, with respect to the coincidence region calibration line, of a target included in the first image and/or the second image; a coincidence target determining unit configured to determine, based on the relative position, whether the target belongs to a coincident target included in the first image and the second image; and a tracking unit configured to, in the case that the target belongs to a coincident target, determine the imaging device that tracks the coincident target based on the movement direction of the coincident target.
According to some embodiments of the disclosure, the acquisition unit is configured to: and determining positioning coordinates of a calibration dividing line based on the positioning coordinates of the marker between the first imaging device and the second imaging device, wherein the calibration dividing line is used as the overlapping region calibration line and represents an intermediate dividing line of the overlapping region.
According to some embodiments of the disclosure, the location unit is configured to: determining the positioning coordinates of the target; and determining the relative position of the target relative to the calibration dividing line based on the positioning coordinates of the calibration dividing line and the positioning coordinates of the target, and taking the relative position of the target relative to the coincidence region calibration line as the relative position of the target.
According to some embodiments of the disclosure, the acquisition unit is configured to: and determining pixel coordinates of a calibration dividing line based on pixel coordinates of a marker positioned between the first imaging device and the second imaging device in the first image and the second image, wherein the calibration dividing line is used as the overlapping region calibration line and represents the middle boundary line of the overlapping region.
According to some embodiments of the disclosure, the location unit is configured to: determining a first pixel coordinate of the target in a first image and a second pixel coordinate in a second image; and determining the relative position of the target relative to the calibration dividing line based on the pixel coordinates of the calibration dividing line and the first and second pixel coordinates of the target as the relative position of the target relative to the coincidence region calibration line.
According to some embodiments of the disclosure, the relative positions include: the target is close to the first imaging device relative to the overlapping region calibration line; the target is close to the second imaging device relative to the coincident region calibration line; and the target is located on the coincidence region calibration line, wherein the coincidence target determining unit is configured to: under the condition that the target is positioned on the overlapping area calibration line, determining that the target belongs to an overlapping target; the tracking unit is configured to: determining the first imaging device as an imaging device for tracking the coincident target under the condition that the movement direction of the coincident target faces the first imaging device; and determining the second imaging device as an imaging device for tracking the coincident target in the condition that the movement direction of the coincident target faces the second imaging device.
According to some embodiments of the disclosure, the tracking unit is further configured to: determining the first imaging device as an imaging device that tracks the target if the target is located proximate to the first imaging device; the second imaging device is determined to be an imaging device that tracks the target if the target is located proximate to the second imaging device.
According to some embodiments of the disclosure, the acquisition unit is configured to: and determining a first calibration line in the first image and determining a second calibration line in the second image, wherein the first calibration line and the second calibration line are used as the overlapping region calibration line, the first calibration line represents the boundary line of the overlapping region in the first image, and the second calibration line represents the boundary line of the overlapping region in the second image.
According to some embodiments of the disclosure, the location unit is configured to: determining a first pixel coordinate of the target in a first image and a second pixel coordinate in a second image; determining a relative position of the object with respect to the first and second calibration lines as a relative position with respect to the overlap region calibration line based on the pixel coordinates of the first calibration line, the pixel coordinates of the second calibration line, and the first and second pixel coordinates of the object, wherein the relative position comprises: the target is located outside the overlapping area and is close to the first imaging device; the target is located outside the overlapping region and is close to the second imaging device; and the target is located within the coincident region.
According to some embodiments of the disclosure, the coincidence target determination unit is configured to: determining that the object belongs to a coincident object under the condition that the object is positioned in the coincident region; determining an imaging device that tracks the coincident target comprises: determining the first imaging device as an imaging device for tracking the coincident target under the condition that the movement direction of the coincident target faces the first imaging device; and determining the second imaging device as an imaging device for tracking the coincident target in the case that the movement direction of the coincident target is toward the second imaging device.
According to some embodiments of the disclosure, the tracking unit is further configured to: determining the first imaging device as an imaging device for tracking the target in the case that the target is located outside the overlapping area and is close to the first imaging device; in the case where the target is located outside the overlapping region and is close to the second imaging device, the second imaging device is determined as an imaging device that tracks the target.
According to some embodiments of the present disclosure, the apparatus for target tracking further comprises an image fusion unit configured to perform image fusion on the coincident target based on the first image and the second image if the target belongs to a coincident target.
According to yet another aspect of the present disclosure, there is also provided an apparatus for target tracking, including: one or more processors; and one or more memories, wherein the memories have computer readable code stored therein that, when executed by the one or more processors, performs the method for target tracking as described above.
According to yet another aspect of the present disclosure, there is also provided a computer-readable storage medium having stored thereon instructions that, when executed by a processor, cause the processor to perform the method for target tracking as described above.
With the method for target tracking according to the disclosure, the relative position of the target can be determined based on the coincidence region calibration line of the first image and the second image, so that the imaging device for tracking the coincidence target in the first image and the second image is determined, and resource waste and information interference caused by simultaneous target tracking by the two imaging devices are avoided.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort to a person of ordinary skill in the art.
FIG. 1 illustrates a flow chart of a method for target tracking according to an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of a vehicle-road collaboration system;
FIG. 3 shows a schematic diagram of a calibration dividing line according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a first calibration line and a second calibration line according to an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of an apparatus for target tracking according to an embodiment of the disclosure;
FIG. 6 shows a schematic diagram of an apparatus for target tracking according to an embodiment of the disclosure;
FIG. 7 illustrates a schematic diagram of an architecture of an exemplary computing device, according to an embodiment of the present disclosure;
fig. 8 shows a schematic diagram of a storage medium according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure. It is apparent that the described embodiments are merely some, rather than all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without inventive effort shall fall within the scope of the present disclosure.
The terms "first," "second," and the like, as used in this disclosure, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. Likewise, the word "comprising" or "comprises", and the like, means that elements or items preceding the word are included in the element or item listed after the word and equivalents thereof, but does not exclude other elements or items. The terms "connected" or "connected," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
A flowchart is used in this disclosure to describe the steps of a method according to an embodiment of the present disclosure. It should be understood that the preceding or following steps are not necessarily performed in the exact order shown; rather, various steps may be performed in reverse order or in parallel, and other operations may be added to or removed from these processes.
Vehicle-road collaboration techniques typically include techniques of environmental awareness, behavioral decisions, path planning, motion control, and the like. The road information obtained based on the vehicle-road cooperation technology can be applied to the vehicle-road cooperation field, and a driver can take the received road information as driving auxiliary information, so that more intelligent and safe driving decisions, motion control and the like can be made according to the driving auxiliary information, and traffic accidents are avoided. As described above, in the field of machine vision-based vehicle-road coordination, it is necessary to continuously arrange imaging devices at certain intervals on both sides of a road for capturing road information of objects such as vehicles, pedestrians, etc. on the road, and to realize tracking of the objects on the road so as to intelligently make driving decisions based on the captured information.
For two imaging devices which are adjacently arranged, an imaging overlapping area exists, and overlapping targets near the overlapping area need to be distinguished to determine the imaging device for tracking the overlapping targets, so that resource waste and information interference caused by simultaneous tracking by the two imaging devices are avoided. The present disclosure provides a method for target tracking for determining an imaging device that tracks an overlay target.
Fig. 1 illustrates a flowchart of a method for target tracking according to some embodiments of the present disclosure, which may be applied, as an example, to a vehicle-road coordination scenario. Vehicle-road coordination refers to a road traffic system that uses technologies such as wireless communication and the Internet to perform omnidirectional dynamic vehicle-to-vehicle and vehicle-to-road information interaction, and that carries out active vehicle safety control and road coordination management on the basis of dynamic traffic information acquisition, so as to achieve effective cooperation of people, vehicles and roads, ensure traffic safety and improve traffic efficiency, thereby forming a safe, efficient and environmentally friendly road traffic system. In other examples, methods according to the present disclosure may also be applied to application scenarios such as autonomous driving and intelligent navigation. The method according to the present disclosure is described herein with vehicle-road coordination as one specific example.
Fig. 2 shows a schematic diagram of a vehicle-road collaboration system. As shown in fig. 2, the machine-vision-based vehicle-road coordination system may include traffic participants, imaging devices continuously arranged at certain intervals, a server, and a network transmission system.
The traffic participants may include any objects on the road, such as the truck, non-motor vehicle, pedestrians and small car shown in fig. 2.
The imaging device may be a first imaging device and a second imaging device located above the road for image acquisition of a target in the road and providing the acquired images to the server. The imaging device may be located in the middle of the roadway, may be located on one side of the roadway, or may be located on both sides of the roadway, without limitation.
The server may be referred to as a vehicle-road cooperative server, and is configured to acquire an image acquired by the imaging device, and perform image processing on the acquired image to calculate road information such as a movement speed, a movement direction, and scene recognition of the target. For example, the vehicle-road coordination server may perform calculations of target identification, tracking, positioning, speed measurement, etc., resulting in scene identification information (such as lane congestion, lane changes, vehicle emergency braking, etc.) and primary information of traffic participant (such as position, movement speed, movement direction, etc.).
After the vehicle-road cooperation server performs image processing, scene recognition and other calculations based on the image provided by the imaging device, the obtained road information is sent to the traffic participation main body through the network transmission system, so that the traffic participation main body can make driving decisions based on the received road information. For example, pedestrians can determine which vehicles pose a threat to themselves and take corresponding avoidance measures. As an example, the network transmission system may be based on a 4G/5G network, or may be based on an LTE-V network, without limitation.
According to the method for object tracking of the present disclosure, as shown in fig. 1, first, in step S101, a first image acquired by a first imaging device and a second image acquired by a second imaging device are obtained, and a coincidence region calibration line of the first image and the second image is determined. According to an embodiment of the present disclosure, the first and second imaging devices may be arranged at a certain interval, such as 15 meters. The first imaging device and the second imaging device have an imaged region of coincidence, the region of coincidence demarcation line representing a boundary of the region of coincidence between the first image and the second image.
Next, in step S102, the relative position, with respect to the coincidence region calibration line, of the target included in the first image and/or the second image is determined. The target represents an object in the road, such as the trucks, non-motor vehicles, pedestrians and small cars shown in fig. 2. The relative position will be described in detail below.
Next, in step S103, it is determined, based on the relative position, whether the target belongs to a coincident target included in the first image and the second image. A coincident target is a target within the coincidence region that belongs to both the image acquisition region of the first imaging device and the image acquisition region of the second imaging device, such as a target within the triangular region in fig. 2. In step S104, if the target belongs to a coincident target, the imaging device that tracks the coincident target is determined based on the movement direction of the coincident target.
Specifically, on the one hand, for an object that belongs only to the image acquisition area of the first imaging device, such as a truck in fig. 2, which is included only in the first image, the server may perform image processing based on only one first image acquired by the first imaging device, thereby obtaining road information related to the truck, such as the current position coordinates of the truck, and may further perform image processing based on a plurality of consecutive first images acquired by the first imaging device, thereby obtaining information on the moving direction, moving speed, and the like of the truck. In this case, the first imaging device may be uniquely determined as the imaging device that tracks the truck. Similarly, for objects that belong only to the image acquisition area of the second imaging device, such as the car in fig. 2, which is only included in the second image, it is possible to uniquely determine the second imaging device as an imaging device that tracks the car.
On the other hand, as described above, there is a coincidence region in the coverage of imaging devices arranged continuously above the road, for example, for a target within a triangle region schematically shown in fig. 2, which belongs to both the image acquisition region of the first imaging device and the image acquisition region of the second imaging device. In other words, the first image acquired by the first imaging device and the second image acquired by the second imaging device each include the object within the triangle area. For objects within the coincident region, such as non-motor vehicles and pedestrians in fig. 2, there are two imaging devices imaging them, and the server needs to determine the imaging device that tracks them, so that image processing is performed based on the images acquired by the imaging devices.
According to an embodiment of the present disclosure, in step S101, determining the coincidence region calibration line of the first image and the second image includes: determining positioning coordinates of a calibration dividing line based on positioning coordinates of a marker located between the first imaging device and the second imaging device, wherein the calibration dividing line serves as the coincidence region calibration line and represents an intermediate dividing line of the coincidence region.
FIG. 3 illustrates a schematic diagram of a calibration dividing line according to an embodiment of the present disclosure. In the vehicle-road coordination system shown in fig. 3, the camera 1 may serve as the first imaging device and the camera 2 as the second imaging device, arranged consecutively on both sides of a road. Illustratively, three traffic participants, the small cars C1, C2 and C3, are on the road. Further, in fig. 3, the image acquisition areas of the camera 1 and the camera 2 are schematically represented by two trapezoidal dashed boxes, respectively.
According to the embodiment of the present disclosure, as shown in fig. 3, there is a coincidence region in the image acquisition regions of the camera 1 and the camera 2, and the positioning coordinates of the calibration dividing line, which is the coincidence region calibration line, may be determined based on the positioning coordinates of the markers located between the first imaging device and the second imaging device. The calibration division line may represent a middle division line of the overlap region. In particular, one or more specific road markers located between the first imaging device and the second imaging device may be selected as the markers, such as a light pole, a tree, a traffic sign, etc. These road markers may be captured by camera 1 and camera 2, i.e. the road markers are included in the images captured by both camera 1 and camera 2. After determining the marker, the location coordinates (or physical coordinates) of the marker may be determined, for example, by a location system or manually measuring the physical coordinates of the marker. Next, a straight line (or curve, broken line) as shown in fig. 3 may be established between the markers based on the physical coordinates of the one or more markers to divide the overlapping region into independent regions.
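As an illustrative sketch only (the patent does not prescribe a particular algorithm), the dividing line established from the markers can be represented by line coefficients fitted to the physical coordinates of the markers; the function name and coordinate values below are assumptions:

```python
import numpy as np

def fit_dividing_line(marker_coords):
    """Fit a straight calibration dividing line a*x + b*y + c = 0 through the
    positioning (physical) coordinates of markers placed between the two
    imaging devices. Two or more markers are assumed here; with a single
    marker the line orientation would have to be supplied separately."""
    pts = np.asarray(marker_coords, dtype=float)   # shape (N, 2)
    centroid = pts.mean(axis=0)
    # principal direction of the marker points gives the line direction
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]                              # unit vector along the line
    a, b = -direction[1], direction[0]             # normal to the line
    c = -(a * centroid[0] + b * centroid[1])
    return a, b, c

# Example: two markers (e.g. light poles) midway between camera 1 and camera 2
print(fit_dividing_line([(50.0, 0.0), (50.0, 12.0)]))   # roughly (1, 0, -50) up to sign
```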
According to an embodiment of the present disclosure, in step S102, determining the relative position of the target included in the first image and/or the second image with respect to the coincidence region calibration line includes: determining positioning coordinates of the target; and determining, based on the positioning coordinates of the calibration dividing line and the positioning coordinates of the target, the relative position of the target with respect to the calibration dividing line as the relative position of the target with respect to the coincidence region calibration line. As shown in fig. 3, the relative position of the target with respect to the calibration dividing line may be determined based on the positioning coordinates of the target and the positioning coordinates of the calibration dividing line. According to an embodiment of the present disclosure, the relative position may include: the target is on the side of the coincidence region calibration line close to the first imaging device; the target is on the side of the coincidence region calibration line close to the second imaging device; or the target is located on the coincidence region calibration line. A simple side test is sketched below.
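A minimal sketch of the side test, assuming the dividing line is stored as coefficients (a, b, c) of a*x + b*y + c = 0 and that negative values of the signed expression correspond to the camera-1 side (both conventions are assumptions, not from the patent):

```python
def side_of_line(point, line):
    """Return which side of the calibration dividing line a positioning
    coordinate lies on; the sign convention for which side belongs to which
    camera is an assumption."""
    a, b, c = line
    x, y = point
    d = a * x + b * y + c
    if d < 0:
        return "near_camera_1"
    if d > 0:
        return "near_camera_2"
    return "on_line"

line = (1.0, 0.0, -50.0)                 # e.g. a dividing line at x = 50 m
print(side_of_line((42.0, 3.5), line))   # -> 'near_camera_1' (like C1)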
Specifically, determining whether the target is on the calibration dividing line may include: setting a size with a relatively large margin for the target C2, and judging, according to the positioning coordinates, whether C2 is on the calibration dividing line. For example, if the camera 1 recognizes the tail of C2, it can be calculated from the preset size of C2 that the head of C2 may be located on the right side of the calibration dividing line; if the camera 2 recognizes the head of C2, it can be calculated from the preset size of C2 that the tail of C2 may be located on the left side of the calibration dividing line. In this case, it may be determined that the target C2 is located on the calibration dividing line.
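A sketch of this margin-based check, simplified to a dividing line at a fixed x-coordinate; the preset vehicle length and margin are illustrative assumptions:

```python
def straddles_line(observed_end_x, line_x, preset_length, margin=1.0, heading_positive_x=True):
    """Given the x-coordinate of the observed end of the vehicle (e.g. the tail
    seen by camera 1), extend it by the preset size plus margin and check
    whether the extrapolated body crosses the dividing line at x = line_x."""
    extent = preset_length + margin
    far_end_x = observed_end_x + extent if heading_positive_x else observed_end_x - extent
    lo, hi = sorted((observed_end_x, far_end_x))
    return lo < line_x < hi

# Camera 1 sees the tail of C2 at x = 47 m; with a preset car length of 5 m the
# head would be past the line at x = 50 m, so C2 is judged to be on the line.
print(straddles_line(observed_end_x=47.0, line_x=50.0, preset_length=5.0))  # True
```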
According to an embodiment of the disclosure, in the case that the target is determined to be located on the calibration dividing line, it is determined that the target belongs to a coincident target. As shown in fig. 3, C2 is located on the calibration dividing line, i.e. it belongs to a coincident target, and the imaging device that tracks C2 needs to be further determined. According to an embodiment of the present disclosure, for the coincident target, the first imaging device is determined as the imaging device that tracks the coincident target in the case that the movement direction of the coincident target is toward the first imaging device; and the second imaging device is determined as the imaging device that tracks the coincident target in the case that the movement direction of the coincident target is toward the second imaging device.
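A sketch of the direction-based choice, assuming the movement direction and the bearings from the target towards the two cameras are expressed as angles in degrees (the representation is an assumption, not specified by the patent):

```python
def choose_tracking_device(heading_deg, bearing_to_cam1_deg, bearing_to_cam2_deg):
    """Pick the camera whose direction the coincident target is moving towards,
    by comparing the angular difference between the target heading and the
    bearing from the target to each camera."""
    def angular_diff(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)

    diff1 = angular_diff(heading_deg, bearing_to_cam1_deg)
    diff2 = angular_diff(heading_deg, bearing_to_cam2_deg)
    return "camera_1" if diff1 <= diff2 else "camera_2"

# A coincident target heading east (90 degrees) with camera 2 to its east and
# camera 1 to its west is handed to camera 2.
print(choose_tracking_device(90.0, bearing_to_cam1_deg=270.0, bearing_to_cam2_deg=90.0))
```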
The method according to the present disclosure may further comprise: determining the first imaging device as the imaging device that tracks the target in the case that the target is located on the side close to the first imaging device, and determining the second imaging device as the imaging device that tracks the target in the case that the target is located on the side close to the second imaging device. For example, as shown in fig. 3, a target located on the left side of the calibration dividing line is tracked by the camera 1, and a target located on the right side of the calibration dividing line is tracked by the camera 2. Specifically, C1 is located on the side close to the camera 1 and belongs only to the image acquisition area of the camera 1, so the camera 1 serves as the imaging device that tracks C1. C3 is located on the side close to the camera 2 and belongs only to the image acquisition area of the camera 2, so the camera 2 serves as the imaging device that tracks C3.
In the method according to the present disclosure, when a target can be clearly determined to be on one side of the calibration dividing line, such as C1 and C3 shown in fig. 3, calculations such as recognition, positioning, speed measurement, azimuth and scene recognition of the target can be performed based on the image acquired by the corresponding camera. However, when the target is located on the calibration dividing line, that is, the target straddles the calibration dividing line, erroneous division is likely to occur due to the difference in the observation angles of the camera 1 and the camera 2. As shown in fig. 3, the tail of C2 acquired by the camera 1 is located on the left side of the calibration dividing line, while the head of C2 acquired by the camera 2 is located on the right side of the calibration dividing line, so image fusion and de-duplication processing need to be performed on the target C2 to determine the imaging device that tracks the target C2.
According to an embodiment of the present disclosure, the method for target tracking may further include: and under the condition of belonging to the coincidence target, carrying out image fusion on the coincidence target based on the first image and the second image. For example, in the case where the server determines that the target C2 is located on the calibration dividing line based on the images acquired by the camera 1 and the camera 2 and based on the preset size of the C2, the server may perform image fusion based on the first image and the second image acquired by the camera 1 and the camera 2 at the same time, for example, may perform image fusion according to the image matching manner, such as performing image matching according to the color, the vehicle type, the position, and the like of the C2. For example, the server may acquire image data near the calibration dividing line of the camera 1 and the camera 2 at the same time, and perform target matching on the image data near the calibration dividing line according to a digital image processing class algorithm, so as to match specific corresponding targets of C2 in the images of the two cameras. Then, the server may determine an imaging device, such as the camera 2, for tracking the fused target based on the image fusion result, that is, calculate road information associated with C2 from the image acquired by the camera 2, and send the road information to C2 through the network transmission system. The above processing is not performed on the image data related to the target C2 acquired by the camera 1.
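As a toy sketch of the matching step described above (the attribute set, data layout and distance threshold are all assumptions; a real deployment would use a proper digital image processing and matching algorithm):

```python
def match_targets(detections_cam1, detections_cam2, max_dist_m=2.0):
    """Match detections near the calibration dividing line from the two cameras
    using position, colour and vehicle type, so that the same physical target
    (such as C2) can be de-duplicated."""
    matches = []
    for d1 in detections_cam1:
        for d2 in detections_cam2:
            same_attrs = d1["color"] == d2["color"] and d1["type"] == d2["type"]
            dx = d1["pos"][0] - d2["pos"][0]
            dy = d1["pos"][1] - d2["pos"][1]
            close = (dx * dx + dy * dy) ** 0.5 <= max_dist_m
            if same_attrs and close:
                matches.append((d1["id"], d2["id"]))
    return matches

cam1 = [{"id": "cam1-7", "color": "red", "type": "car", "pos": (49.8, 3.2)}]
cam2 = [{"id": "cam2-3", "color": "red", "type": "car", "pos": (50.1, 3.0)}]
print(match_targets(cam1, cam2))   # -> [('cam1-7', 'cam2-3')]  (the same C2)
```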
Thus, the method according to the present disclosure can uniquely determine the imaging device that tracks the coincident target, so as to avoid resource waste and information interference caused by two imaging devices tracking the same target simultaneously. Furthermore, after the tracking imaging device is determined, scene recognition of the coincident target can also be performed advantageously. Specifically, scene recognition requires continuous imaging tracking, from which a scene determination is made. For example, for an emergency braking scene, the speed of a coincident target needs to be determined in 5 consecutive images; therefore, after image fusion and de-duplication are performed on the coincident target, the server can determine which detections in the consecutive images acquired by the first imaging device and the second imaging device belong to the same target, continuously analyze the motion state of the target based on the images acquired by the determined imaging device, perform scene recognition, and send the scene recognition result to the coincident target.
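For illustration only, an emergency-braking check over 5 consecutive frames might look as follows; the frame interval and deceleration threshold are assumptions, not values from the patent:

```python
def is_emergency_braking(speeds_mps, frame_dt_s=0.1, decel_threshold_mps2=6.0):
    """Given the speed of the same de-duplicated target in 5 consecutive
    frames, flag emergency braking when the average deceleration over the
    window exceeds the threshold."""
    if len(speeds_mps) < 5:
        return False
    decel = (speeds_mps[0] - speeds_mps[-1]) / ((len(speeds_mps) - 1) * frame_dt_s)
    return decel >= decel_threshold_mps2

# Speed of the coincident target in 5 consecutive frames, 0.1 s apart:
print(is_emergency_braking([20.0, 19.2, 18.4, 17.6, 16.8]))  # True (about 8 m/s^2)
```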
According to an embodiment of the present disclosure, the determining the coincidence region calibration line of the first image and the second image includes: and determining pixel coordinates of a calibration dividing line based on pixel coordinates of a marker positioned between the first imaging device and the second imaging device in the first image and the second image, wherein the calibration dividing line is used as the overlapping region calibration line and represents the middle boundary line of the overlapping region. In this embodiment, the calibration division line as shown in fig. 3 is determined based on the pixel coordinates, and the specific steps are similar to the process of determining the calibration division line based on the positioning coordinates of the markers as described above, and will not be described again.
The determining the relative position of the target included in the first image and/or the second image with respect to the coincidence region calibration line includes: determining a first pixel coordinate of the target in the first image and a second pixel coordinate of the target in the second image; and determining, based on the pixel coordinates of the calibration dividing line and the first and second pixel coordinates of the target, the relative position of the target with respect to the calibration dividing line as the relative position of the target with respect to the coincidence region calibration line. Specifically, the calibration dividing line in fig. 3 has third pixel coordinates in the first image and fourth pixel coordinates in the second image. The server may determine the relative position of the target C1 with respect to the calibration dividing line, such as being on the left side of the calibration dividing line, based on the first pixel coordinate of the target in the first image and the third pixel coordinates of the calibration dividing line in the first image. Similarly, the server may determine the relative position of the target C3 with respect to the calibration dividing line, such as being on the right side of the calibration dividing line, based on the second pixel coordinate of the target in the second image and the fourth pixel coordinates of the calibration dividing line in the second image. Further, for a target located near the calibration dividing line, such as the target C2, the server may determine whether the target C2 crosses the calibration dividing line based on the preset size of C2. In addition, image fusion may be performed on the target C2 based on the first image and the second image, and the imaging device that tracks the target C2 may be determined according to the movement direction of the target C2, which is similar to the processes described above and is not repeated here.
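A sketch of the pixel-coordinate variant of the side test, assuming for simplicity that the dividing line projects to a vertical pixel column in each image (an assumption of this sketch only):

```python
def side_in_image(target_px, line_px_x):
    """Compare the target's pixel x-coordinate with the pixel x-coordinate of
    the calibration dividing line in the same image."""
    if target_px[0] < line_px_x:
        return "left_of_line"
    if target_px[0] > line_px_x:
        return "right_of_line"
    return "on_line"

# C1 in the first image, compared against the line's pixel coordinate there:
print(side_in_image(target_px=(310, 540), line_px_x=960))   # -> 'left_of_line'
```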
According to an embodiment of the present disclosure, the determining the coincidence region calibration line of the first image and the second image includes: and determining a first calibration line in the first image and determining a second calibration line in the second image, wherein the first calibration line and the second calibration line are used as the overlapping region calibration line, the first calibration line represents the boundary line of the overlapping region in the first image, and the second calibration line represents the boundary line of the overlapping region in the second image. Fig. 4 shows a schematic diagram of a first calibration line and a second calibration line according to an embodiment of the present disclosure.
As an example, a first calibration line may be set in the first image, a second calibration line may be set in the second image, and the actual overlapping area of the camera 1 and the camera 2 may be represented by an image overlapping area between the first calibration line and the second calibration line, for example, the image overlapping area may be set to be slightly larger than the actual overlapping area.
According to an embodiment of the present disclosure, the determining a relative position of the target included in the first image and/or the second image with respect to the coincidence region calibration line includes: determining a first pixel coordinate of the target in the first image and a second pixel coordinate of the target in the second image; and determining, based on the pixel coordinates of the first calibration line, the pixel coordinates of the second calibration line, and the first and second pixel coordinates of the target, the relative position of the target with respect to the first and second calibration lines as the relative position with respect to the coincidence region calibration line. According to an embodiment of the present disclosure, the relative position includes: the target is located outside the coincidence region and close to the first imaging device; the target is located outside the coincidence region and close to the second imaging device; or the target is located within the coincidence region.
As shown in fig. 4, the server may determine that the target C1 is located on the left side of the first calibration line, i.e. that the target is located outside the coincidence region and close to the first imaging device, based on the first pixel coordinate of the target C1 in the first image and the pixel coordinates of the first calibration line in the first image. Furthermore, the server may determine that the target C3 is located on the right side of the second calibration line, i.e. that the target is located outside the coincidence region and close to the second imaging device, based on the second pixel coordinate of the target C3 in the second image and the pixel coordinates of the second calibration line in the second image. According to an embodiment of the present disclosure, the method for target tracking may further include: determining the first imaging device as the imaging device that tracks the target if the target is outside the coincidence region and close to the first imaging device, and determining the second imaging device as the imaging device that tracks the target if the target is outside the coincidence region and close to the second imaging device. A sketch of this two-line classification follows.
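In the sketch below, pixel x-coordinates, the use of None for "not detected in that image", and the left/right orientation matching the FIG. 4 description are assumptions made for illustration:

```python
def classify_against_two_lines(x_px_img1, x_px_img2, first_line_x, second_line_x):
    """Classify a target against the first calibration line (in the first
    image) and the second calibration line (in the second image). The overlap
    is taken to lie to the right of the first line and to the left of the
    second line, as described for FIG. 4."""
    if x_px_img1 is not None and x_px_img1 < first_line_x:
        return "outside_overlap_near_camera_1"   # like C1
    if x_px_img2 is not None and x_px_img2 > second_line_x:
        return "outside_overlap_near_camera_2"   # like C3
    return "inside_overlap"                      # coincident target, like C2

print(classify_against_two_lines(310, None, first_line_x=1500, second_line_x=400))
# -> 'outside_overlap_near_camera_1'
print(classify_against_two_lines(1600, 350, first_line_x=1500, second_line_x=400))
# -> 'inside_overlap'
```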
According to an embodiment of the present disclosure, the determining whether the target belongs to a coincident target included in the first image and the second image includes: and determining that the object belongs to the coincident object under the condition that the object is positioned in the coincident region. As shown in fig. 4, the server may determine that the object C2 is located on the right side of the first calibration line based on the first pixel coordinates of the object C2 in the first image and the pixel coordinates of the first calibration line in the first image, or may determine that the object C2 is located on the left side of the second calibration line based on the second pixel coordinates of the object C2 in the second image and the pixel coordinates of the second calibration line in the second image, whereby the server may determine that the object C2 is located within the overlapping region, that is, belongs to the overlapping object included in the first image and the second image.
According to an embodiment of the present disclosure, an imaging apparatus for determining a coincident target to track based on a movement direction of the coincident target includes: determining the first imaging device as an imaging device for tracking the coincident target under the condition that the movement direction of the coincident target faces the first imaging device; and determining the second imaging device as an imaging device for tracking the coincident target in the case that the movement direction of the coincident target is toward the second imaging device.
As an example, in a vehicle-road coordination system, after a target enters the coverage area of an imaging device and is identified, the server, for example, may assign an identity (ID) to the target, and the target is then tracked continuously by the consecutively arranged imaging devices. While the target travels through the coincidence region of two imaging devices, the identity of the target needs to be handed over so that the target keeps a unique identity during its travel. In other words, in the case that the target is determined to belong to a coincident target, image fusion may also be performed on the coincident target based on the first image and the second image.
Specifically, as shown in fig. 4, in the case where a target is determined within the coincidence region based on the first image acquired by the camera 1, for example when the target C2 located on the right side of the first calibration line is being tracked by the camera 1 and the movement direction of the target C2 is away from the central area covered by the camera 1, the server may, for example, extract the position information of the target C2 and the image information in the first image. Correspondingly, in the case where a new target is determined within the coincidence region based on the second image acquired by the camera 2, for example when the target C2 located on the left side of the second calibration line is not yet tracked by the camera 2 and the target's road travelling direction angle points toward the central area covered by the camera 2, the server may extract the position information of the target C2 and the image information in the second image.
The server may then perform image fusion on the target C2 based on the extracted position information and the image information, such as by adopting an image matching algorithm as described above, to determine whether the target C2 acquired in the first image and the target C2 acquired in the second image belong to the same target. Under the condition that the targets belong to the same target, the server can transfer the identity of the target C2 in the tracking process of the camera 1 to the camera 2, and determine that the camera 2 continues to track the target C2, so that the imaging device for tracking the overlapping target can be uniquely determined.
Further, for the camera 1, in the case where a new target is determined within the coincidence region in the first image, that is, the target is not currently tracked by the camera 1 and the movement direction of the target is away from the central area covered by the camera 1, the target is ignored and not processed. For the camera 2, when a new target is found within the coincidence region, that is, the target is not currently tracked by the camera 2 and its road travelling direction angle points toward the central area covered by the camera 2, the camera 2 tracks the target and assigns an identity to it. A sketch of these hand-over rules follows.
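The following sketch condenses the rules above into one function; the data structures, return convention and ID scheme are assumptions for illustration:

```python
def handle_overlap_detection(is_already_tracked, moving_towards_own_center,
                             tracked_ids, next_id):
    """Decide what a camera does with a target it sees inside the overlap:
    keep tracking it, ignore it (the other camera owns it), or start tracking
    it and assign an identity to it."""
    if is_already_tracked:
        # keep tracking until image fusion hands the identity to the other camera
        return tracked_ids, next_id, "keep_tracking"
    if moving_towards_own_center:
        # new target heading into this camera's central area: start tracking it
        return tracked_ids + [next_id], next_id + 1, f"assigned_id_{next_id}"
    # new target moving away from this camera: ignore it
    return tracked_ids, next_id, "ignored"

ids, nxt, action = handle_overlap_detection(False, True, tracked_ids=[], next_id=101)
print(action)   # -> 'assigned_id_101' (e.g. camera 2 taking over C2)
```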
Based on image fusion within the coincidence region and hand-over processing of the identity, the imaging device that tracks the coincident target can be uniquely determined, so as to avoid the resource waste and information interference caused by two imaging devices tracking the same target simultaneously. After image fusion and de-duplication are performed on the coincident target, the server can determine which detections in the consecutive images acquired by the first imaging device and the second imaging device belong to the same target, continuously analyze the motion state of the target based on the images acquired by the determined imaging device, perform scene recognition, and send the scene recognition result to the coincident target.
With the method for target tracking according to the disclosure, the relative position of the target can be determined based on the coincidence region calibration line of the first image and the second image, so that the imaging device for tracking the coincidence target in the first image and the second image is determined, and resource waste and information interference caused by simultaneous target tracking by the two imaging devices are avoided.
According to another aspect of the present disclosure, there is also provided an apparatus for target tracking, fig. 5 shows a schematic diagram of an apparatus for target tracking according to an embodiment of the present disclosure.
As shown in fig. 5, the apparatus 1000 for object tracking includes an acquisition unit 1010, a position unit 1020, a coincidence object determination unit 1030, and a tracking unit 1040. The acquisition unit 1010 may be configured to: a first image acquired by a first imaging device and a second image acquired by a second imaging device are obtained and a coincidence region calibration line of the first image and the second image is determined, wherein the coincidence region calibration line represents a boundary line of a coincidence region between the first image and the second image.
The position unit 1020 may be configured to determine the relative position, with respect to the coincidence region calibration line, of the target included in the first image and/or the second image. The target represents an object in the road, such as the trucks, non-motor vehicles, pedestrians and small cars shown in fig. 2. The relative position is as described above for the method.
The coincidence target determination unit 1030 may be configured to determine whether the target belongs to a coincidence target included in the first image and the second image based on the relative position. The coincident target represents a target within a coincident region that belongs to both the image acquisition region of the first imaging device and the image acquisition region of the second imaging device, such as a target within the triangular region in fig. 2.
The tracking unit 1040 may be configured to, in the case that the target belongs to a coincident target, determine the imaging device that tracks the coincident target based on the movement direction of the coincident target. As described above, there is a coincidence region in the coverage of imaging devices arranged consecutively above the road; for example, a target within the triangular region schematically shown in fig. 2 belongs to both the image acquisition region of the first imaging device and the image acquisition region of the second imaging device. In other words, the first image acquired by the first imaging device and the second image acquired by the second imaging device each include the target within the triangular region. For targets within the coincidence region, such as the non-motor vehicle and pedestrians in fig. 2, two imaging devices image them, and the server needs to determine the imaging device that tracks them, so that image processing is performed based on the images acquired by that imaging device.
According to some embodiments of the present disclosure, the acquisition unit 1010 may be configured to: determine positioning coordinates of a calibration dividing line based on positioning coordinates of a marker located between the first imaging device and the second imaging device, wherein the calibration dividing line serves as the coincidence region calibration line and represents a middle dividing line of the coincidence region.
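For illustration only, the calibration dividing line may be pictured as the line through the marker that is perpendicular to the segment joining the two imaging devices. The sketch below assumes planar positioning coordinates (for example, coordinates projected onto a map plane); the names and this geometric assumption are not taken from the disclosure.

```python
import numpy as np

def calibration_dividing_line(marker_xy, device1_xy, device2_xy):
    """Return a point on the dividing line and a unit direction vector.

    The line passes through the marker placed between the two imaging devices
    and is perpendicular to the segment joining the devices.
    """
    point = np.asarray(marker_xy, dtype=float)
    axis = np.asarray(device2_xy, dtype=float) - np.asarray(device1_xy, dtype=float)
    axis /= np.linalg.norm(axis)
    direction = np.array([-axis[1], axis[0]])  # the device axis rotated by 90 degrees
    return point, direction
```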
According to some embodiments of the present disclosure, the position unit 1020 may be configured to: determine positioning coordinates of the target; and determine the relative position of the target with respect to the calibration dividing line, based on the positioning coordinates of the calibration dividing line and the positioning coordinates of the target, as the relative position of the target with respect to the coincidence region calibration line.
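One possible realization of this comparison, sketched under the same planar-coordinate assumption, projects the target onto the axis joining the two devices and compares the projection with that of the marker; the tolerance value and label strings are illustrative only.

```python
import numpy as np

def relative_position(target_xy, marker_xy, device1_xy, device2_xy, tol=0.5):
    """Classify a target as nearer the first device, nearer the second device,
    or on the calibration dividing line (within an assumed tolerance `tol`,
    expressed in the same length unit as the coordinates)."""
    axis = np.asarray(device2_xy, dtype=float) - np.asarray(device1_xy, dtype=float)
    axis /= np.linalg.norm(axis)
    offset = np.asarray(target_xy, dtype=float) - np.asarray(marker_xy, dtype=float)
    d = float(np.dot(offset, axis))
    if abs(d) <= tol:
        return "on_line"
    return "near_device2" if d > 0 else "near_device1"
```

For example, a target whose projection lies past the marker in the direction of the second device would be classified as "near_device2".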
According to some embodiments of the present disclosure, the acquisition unit 1010 may be configured to: determine pixel coordinates of a calibration dividing line based on pixel coordinates, in the first image and the second image, of a marker located between the first imaging device and the second imaging device, wherein the calibration dividing line serves as the coincidence region calibration line and represents a middle dividing line of the coincidence region.
According to some embodiments of the present disclosure, the position unit 1020 may be configured to: determine a first pixel coordinate of the target in the first image and a second pixel coordinate of the target in the second image; and determine the relative position of the target with respect to the calibration dividing line, based on the pixel coordinates of the calibration dividing line and the first and second pixel coordinates of the target, as the relative position of the target with respect to the coincidence region calibration line.
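In pixel coordinates the same idea reduces to a signed distance of the target's image point from the dividing line observed in each image. The sketch below is illustrative; which sign corresponds to which imaging device is assumed to be known from calibration and is not specified by the disclosure.

```python
import numpy as np

def signed_pixel_distance(target_px, line_point_px, line_dir_px):
    """Signed perpendicular distance (in pixels) of the target's image point
    from the calibration dividing line given by a point and a direction."""
    normal = np.array([-line_dir_px[1], line_dir_px[0]], dtype=float)
    normal /= np.linalg.norm(normal)
    offset = np.asarray(target_px, dtype=float) - np.asarray(line_point_px, dtype=float)
    return float(np.dot(offset, normal))
```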
According to some embodiments of the present disclosure, the relative position includes: the target is close to the first imaging device relative to the coincidence region calibration line; the target is close to the second imaging device relative to the coincidence region calibration line; and the target is located on the coincidence region calibration line. The coincidence target determination unit 1030 may be configured to: determine that the target belongs to a coincident target in a case where the target is located on the coincidence region calibration line. The tracking unit 1040 may be configured to: determine the first imaging device as the imaging device that tracks the coincident target in a case where the movement direction of the coincident target is toward the first imaging device; and determine the second imaging device as the imaging device that tracks the coincident target in a case where the movement direction of the coincident target is toward the second imaging device.
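Taken together, the assignment logic of this embodiment can be sketched as follows; the label strings are illustrative placeholders rather than terminology of the disclosure.

```python
def assign_tracking_device(relative_position, heading=None):
    """Decide which imaging device tracks the target.

    `relative_position` is "near_device1", "near_device2" or "on_line";
    `heading` ("towards_device1" / "towards_device2") matters only for a
    coincident target located on the calibration line.
    """
    if relative_position == "near_device1":
        return "device1"
    if relative_position == "near_device2":
        return "device2"
    # Coincident target on the calibration line: hand it to the device it is
    # moving towards, so that exactly one device keeps tracking it.
    return "device1" if heading == "towards_device1" else "device2"
```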
According to some embodiments of the present disclosure, the tracking unit 1040 may be further configured to: determine the first imaging device as the imaging device that tracks the target in a case where the target is close to the first imaging device relative to the coincidence region calibration line; and determine the second imaging device as the imaging device that tracks the target in a case where the target is close to the second imaging device relative to the coincidence region calibration line.
According to some embodiments of the present disclosure, the acquisition unit 1010 may be configured to: determine a first calibration line in the first image and a second calibration line in the second image, wherein the first calibration line and the second calibration line serve as the coincidence region calibration line, the first calibration line represents the boundary line of the coincidence region in the first image, and the second calibration line represents the boundary line of the coincidence region in the second image.
According to some embodiments of the present disclosure, the position unit 1020 may be configured to: determine a first pixel coordinate of the target in the first image and a second pixel coordinate of the target in the second image; and determine a relative position of the target with respect to the first calibration line and the second calibration line, based on pixel coordinates of the first calibration line, pixel coordinates of the second calibration line, and the first and second pixel coordinates of the target, as the relative position with respect to the coincidence region calibration line, wherein the relative position includes: the target is located outside the coincidence region and close to the first imaging device; the target is located outside the coincidence region and close to the second imaging device; and the target is located within the coincidence region.
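An illustrative sketch of this two-line variant is given below. The boolean inputs stand for the per-image comparison against the first and second calibration lines, and the handling of a target visible in only one image is an assumption rather than something specified by the disclosure.

```python
def classify_with_two_lines(inside_overlap_first, inside_overlap_second):
    """Classify a target using one calibration line per image.

    `inside_overlap_first`: True if the target in the first image lies on the
    overlap side of the first calibration line, False if on the non-overlap
    side, None if the target is not detected in the first image; analogously
    for `inside_overlap_second`.
    """
    if inside_overlap_first is None and inside_overlap_second is None:
        raise ValueError("target not detected in either image")
    if inside_overlap_first and inside_overlap_second:
        return "within_overlap"        # coincident target seen by both devices
    if inside_overlap_first is False:
        return "outside_near_device1"
    if inside_overlap_second is False:
        return "outside_near_device2"
    return "within_overlap"            # past the line in whichever image saw it
```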
According to some embodiments of the present disclosure, the coincidence target determination unit 1030 may be configured to: determine that the target belongs to a coincident target in a case where the target is located within the coincidence region. Determining the imaging device that tracks the coincident target includes: determining the first imaging device as the imaging device that tracks the coincident target in a case where the movement direction of the coincident target is toward the first imaging device; and determining the second imaging device as the imaging device that tracks the coincident target in a case where the movement direction of the coincident target is toward the second imaging device.
According to some embodiments of the present disclosure, the tracking unit 1040 may be further configured to: determine the first imaging device as the imaging device that tracks the target in a case where the target is located outside the coincidence region and close to the first imaging device; and determine the second imaging device as the imaging device that tracks the target in a case where the target is located outside the coincidence region and close to the second imaging device.
According to some embodiments of the present disclosure, as shown in fig. 5, the apparatus 1000 for target tracking may further include an image fusion unit 1050. The image fusion unit 1050 may be configured to perform image fusion of the coincident target based on the first image and the second image, in a case where the target belongs to a coincident target.
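As a sketch only, the image fusion step can be pictured as merging the two detections of the same physical object into a single record whose identity survives the hand-over; the field names below are hypothetical and not drawn from the disclosure.

```python
def fuse_coincident_detections(det_first, det_second, tracking_device):
    """Merge the two detections of one coincident target into a single record.

    `det_first` / `det_second` are detection dictionaries from the first and
    second image; `tracking_device` is "device1" or "device2" as decided from
    the movement direction of the coincident target.
    """
    if tracking_device == "device1":
        primary, secondary = det_first, det_second
    else:
        primary, secondary = det_second, det_first
    return {
        "track_id": det_first.get("track_id") or det_second.get("track_id"),
        "tracked_by": tracking_device,
        "primary_observation": primary,      # used for continued tracking
        "secondary_observation": secondary,  # retained in the fused record
    }
```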
According to yet another aspect of the present disclosure, there is also provided an apparatus for target tracking. Fig. 6 shows a schematic diagram of an apparatus 2000 for target tracking according to an embodiment of the present disclosure.
As shown in fig. 6, the apparatus 2000 may include one or more processors 2010 and one or more memories 2020. The memory 2020 stores computer-readable code which, when executed by the one or more processors 2010, performs the method for target tracking as described above.
Methods or apparatuses according to embodiments of the present disclosure may also be implemented by means of the architecture of the computing device 3000 shown in fig. 7. As shown in fig. 7, the computing device 3000 may include a bus 3010, one or more CPUs 3020, a Read Only Memory (ROM) 3030, a Random Access Memory (RAM) 3040, a communication port 3050 connected to a network, an input/output component 3060, a hard disk 3070, and the like. A storage device in the computing device 3000, such as the ROM 3030 or the hard disk 3070, may store various data or files used in the processing and/or communication of the method for target tracking provided by the present disclosure, as well as program instructions executed by the CPU. The computing device 3000 may also include a user interface 3080. Of course, the architecture shown in fig. 7 is merely exemplary, and when implementing different devices, one or more components of the computing device shown in fig. 7 may be omitted according to actual needs.
According to yet another aspect of the present disclosure, a computer-readable storage medium is also provided. Fig. 8 shows a schematic diagram 4000 of a storage medium according to the present disclosure.
As shown in fig. 8, the computer storage medium 4020 has computer-readable instructions 4010 stored thereon. When the computer-readable instructions 4010 are executed by a processor, the method for target tracking according to the embodiments of the present disclosure described with reference to the above drawings may be performed. The computer-readable storage medium includes, but is not limited to, volatile memory and/or non-volatile memory. The volatile memory may include, for example, a Random Access Memory (RAM) and/or a cache memory (cache). The non-volatile memory may include, for example, a Read Only Memory (ROM), a hard disk, a flash memory, and the like.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods described above may be implemented by a program instructing associated hardware, and the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc. Alternatively, all or some of the steps of the above embodiments may be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in the form of hardware or in the form of a software functional module. The present disclosure is not limited to any specific form of combination of hardware and software.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present disclosure and is not to be construed as limiting thereof. Although a few exemplary embodiments of this disclosure have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this disclosure. Accordingly, such modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of this disclosure as defined in the appended claims. The disclosure is defined by the claims and their equivalents.
Claims (14)
1. A method for target tracking, comprising:
obtaining a first image acquired by a first imaging device and a second image acquired by a second imaging device, and determining a coincidence region calibration line of the first image and the second image, wherein the coincidence region calibration line represents a boundary line of a coincidence region between the first image and the second image;
determining a relative position of a target included in the first image and/or the second image with respect to the coincidence region calibration line;
determining whether the target belongs to a coincident target included in the first image and the second image based on the relative position;
in a case where the target belongs to the coincident target, determining an imaging device for tracking the coincident target based on a movement direction of the coincident target,
wherein the determining the coincidence region calibration line of the first image and the second image includes:
determining positioning coordinates of a calibration dividing line based on positioning coordinates of a marker located between the first imaging device and the second imaging device, wherein the calibration dividing line serves as the coincidence region calibration line and represents a middle dividing line of the coincidence region.
2. The method of claim 1, wherein the determining the relative position of the target included in the first image and/or the second image with respect to the coincidence region calibration line comprises:
determining positioning coordinates of the target; and
determining the relative position of the target with respect to the calibration dividing line, based on the positioning coordinates of the calibration dividing line and the positioning coordinates of the target, as the relative position of the target with respect to the coincidence region calibration line.
3. The method of claim 1, wherein the determining the coincidence region calibration line of the first image and the second image comprises:
determining pixel coordinates of a calibration dividing line based on pixel coordinates, in the first image and the second image, of a marker located between the first imaging device and the second imaging device, wherein the calibration dividing line serves as the coincidence region calibration line and represents a middle dividing line of the coincidence region.
4. The method of claim 3, wherein the determining the relative position of the target included in the first image and/or the second image with respect to the coincidence region calibration line comprises:
determining a first pixel coordinate of the target in the first image and a second pixel coordinate of the target in the second image; and
determining the relative position of the target with respect to the calibration dividing line, based on the pixel coordinates of the calibration dividing line and the first and second pixel coordinates of the target, as the relative position of the target with respect to the coincidence region calibration line.
5. The method of claim 2 or 4, wherein the relative position comprises:
the target is close to the first imaging device relative to the coincidence region calibration line;
the target is close to the second imaging device relative to the coincidence region calibration line; and
the target is located on the coincidence region calibration line,
wherein determining whether the target belongs to a coincident target included in the first image and the second image based on the relative position comprises: determining that the target belongs to a coincident target in a case where the target is located on the coincidence region calibration line;
determining an imaging device that tracks the coincident target comprises:
determining the first imaging device as the imaging device for tracking the coincident target in a case where the movement direction of the coincident target is toward the first imaging device; and
determining the second imaging device as the imaging device for tracking the coincident target in a case where the movement direction of the coincident target is toward the second imaging device.
6. The method of claim 5, further comprising:
determining the first imaging device as an imaging device that tracks the target in a case where the target is close to the first imaging device relative to the coincidence region calibration line; and
determining the second imaging device as an imaging device that tracks the target in a case where the target is close to the second imaging device relative to the coincidence region calibration line.
7. The method of claim 1, wherein the determining the coincidence region calibration line of the first image and the second image comprises:
determining a first calibration line in the first image and a second calibration line in the second image, wherein the first calibration line and the second calibration line serve as the coincidence region calibration line, the first calibration line represents the boundary line of the coincidence region in the first image, and the second calibration line represents the boundary line of the coincidence region in the second image.
8. The method of claim 7, wherein the determining the relative position of the target included in the first image and/or the second image with respect to the coincidence region calibration line comprises:
determining a first pixel coordinate of the target in the first image and a second pixel coordinate of the target in the second image; and
determining a relative position of the target with respect to the first calibration line and the second calibration line, based on pixel coordinates of the first calibration line, pixel coordinates of the second calibration line, and the first and second pixel coordinates of the target, as the relative position with respect to the coincidence region calibration line,
wherein the relative position comprises:
the target is located outside the coincidence region and close to the first imaging device;
the target is located outside the coincidence region and close to the second imaging device; and
the target is located within the coincidence region.
9. The method of claim 8, wherein the determining whether the target belongs to a coincident target included in the first image and the second image comprises:
determining that the target belongs to a coincident target in a case where the target is located within the coincidence region;
determining an imaging device that tracks the coincident target comprises:
determining the first imaging device as the imaging device for tracking the coincident target in a case where the movement direction of the coincident target is toward the first imaging device; and determining the second imaging device as the imaging device for tracking the coincident target in a case where the movement direction of the coincident target is toward the second imaging device.
10. The method of claim 8, further comprising:
determining the first imaging device as an imaging device that tracks the target in a case where the target is located outside the coincidence region and is close to the first imaging device; and
determining the second imaging device as an imaging device that tracks the target in a case where the target is located outside the coincidence region and is close to the second imaging device.
11. The method of claim 1, further comprising: in a case where the target belongs to the coincident target, performing image fusion of the coincident target based on the first image and the second image.
12. An apparatus for target tracking, comprising:
an acquisition unit configured to obtain a first image acquired by a first imaging device and a second image acquired by a second imaging device, and to determine a coincidence region calibration line of the first image and the second image, wherein the coincidence region calibration line represents a boundary line of a coincidence region between the first image and the second image;
a position unit configured to determine a relative position of a target included in the first image and/or the second image with respect to the coincidence region calibration line;
a coincidence target determination unit configured to determine, based on the relative position, whether the target belongs to a coincident target included in the first image and the second image; and
a tracking unit configured to determine, in a case where the target belongs to a coincident target, an imaging device that tracks the coincident target based on a movement direction of the coincident target,
wherein the acquisition unit is further configured to: determine positioning coordinates of a calibration dividing line based on positioning coordinates of a marker located between the first imaging device and the second imaging device, wherein the calibration dividing line serves as the coincidence region calibration line and represents a middle dividing line of the coincidence region.
13. An apparatus for target tracking, comprising:
one or more processors; and
one or more memories having stored therein computer-readable code which, when executed by the one or more processors, performs the method for target tracking of any one of claims 1-11.
14. A computer-readable storage medium having instructions stored thereon which, when executed by a processor, cause the processor to perform the method for target tracking according to any one of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911065424.5A CN110807804B (en) | 2019-11-04 | 2019-11-04 | Method, apparatus, device and readable storage medium for target tracking |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110807804A CN110807804A (en) | 2020-02-18 |
CN110807804B true CN110807804B (en) | 2023-08-29 |
Family
ID=69501145
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911065424.5A Active CN110807804B (en) | 2019-11-04 | 2019-11-04 | Method, apparatus, device and readable storage medium for target tracking |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110807804B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104660998A (en) * | 2015-02-16 | 2015-05-27 | 苏州阔地网络科技有限公司 | Relay tracking method and system |
CN105427338A (en) * | 2015-11-02 | 2016-03-23 | 浙江宇视科技有限公司 | Moving object tracking method and device |
CN107240124A (en) * | 2017-05-19 | 2017-10-10 | 清华大学 | Across camera lens multi-object tracking method and device based on space-time restriction |
WO2018077050A1 (en) * | 2016-10-27 | 2018-05-03 | 深圳市道通智能航空技术有限公司 | Target tracking method and aircraft |
CN108174152A (en) * | 2017-12-28 | 2018-06-15 | 深圳英飞拓科技股份有限公司 | A kind of target monitoring method and target monitor system |
CN108198199A (en) * | 2017-12-29 | 2018-06-22 | 北京地平线信息技术有限公司 | Moving body track method, moving body track device and electronic equipment |
CN110276789A (en) * | 2018-03-15 | 2019-09-24 | 杭州海康威视系统技术有限公司 | Method for tracking target and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109716256A (en) * | 2016-08-06 | 2019-05-03 | 深圳市大疆创新科技有限公司 | System and method for tracking target |
Similar Documents

Publication | Title
---|---
Zhu et al. | Overview of environment perception for intelligent vehicles
Chougule et al. | Reliable multilane detection and classification by utilizing cnn as a regression network
CN103021186B | Vehicle monitoring method and vehicle monitoring system
US20210319236A1 | Semantically aware keypoint matching
US8751154B2 | Enhanced clear path detection in the presence of traffic infrastructure indicator
CN111316286A | Trajectory prediction method and device, storage medium, driving system and vehicle
JP2020052694A | Object detection device, object detection method, and computer program for object detection
CN108460968A | A kind of method and device obtaining traffic information based on car networking
CN108133484B | Automatic driving processing method and device based on scene segmentation and computing equipment
JP7454685B2 | Detection of debris in vehicle travel paths
CN107836017A | Semaphore identification device and semaphore recognition methods
US20220406077A1 | Method and system for estimating road lane geometry
Dong et al. | Mcity data collection for automated vehicles study
Batista et al. | Lane detection and estimation using perspective image
Xu et al. | Road lane modeling based on RANSAC algorithm and hyperbolic model
CN110765224A | Processing method of electronic map, vehicle vision repositioning method and vehicle-mounted equipment
Dinh et al. | Development of a tracking-based system for automated traffic data collection for roundabouts
CN111754388A | Picture construction method and vehicle-mounted terminal
CN110648538B | Traffic information sensing system and method based on laser radar network
Rubaiyat et al. | Multi-sensor data fusion for vehicle detection in autonomous vehicle applications
Suganuma et al. | Current status and issues of traffic light recognition technology in autonomous driving system
CN110807804B | Method, apparatus, device and readable storage medium for target tracking
JP2021064056A | Zebra zone recognition device, vehicle control device, zebra zone recognition method, and program
Bu et al. | Toward map updates with crosswalk change detection using a monocular bus camera
Beresnev et al. | Automated Driving System based on Roadway and Traffic Conditions Monitoring.
Legal Events

Code | Title | Description
---|---|---
PB01 | Publication |
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40021379; Country of ref document: HK
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |