CN117726656B - Target tracking method, device, system and medium based on super-resolution image
- Publication number: CN117726656B
- Application number: CN202410176096A
- Authority: CN (China)
- Prior art keywords: image, super-resolution, target, transformation matrix
- Legal status: Active
Classifications
- Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a target tracking method, device, system and medium based on super-resolution images. The method is applied to a seeker system and comprises the following steps: acquiring a first image and determining a transformation matrix for the first image; correcting the first image according to the transformation matrix to obtain a second image; performing super-resolution reconstruction on the second image to obtain a first super-resolution image corresponding to the second image; and tracking the target area in the first super-resolution image. By adopting the method, the target area can be accurately positioned and tracked.
Description
Technical Field
The invention relates to the technical field of image guidance, in particular to a target tracking method, device, system and medium based on super-resolution images.
Background
In the related art, strapdown image guidance acquires field-of-view images in real time through a camera strapped down on the aircraft. However, owing to the long imaging distance, the large field of view and aircraft shake, targets in the field-of-view image cannot be accurately captured and are therefore difficult to track.
Disclosure of Invention
Based on this, it is necessary to provide a target tracking method, apparatus, electronic device and medium based on super-resolution images that are capable of accurately capturing and tracking a target.
A target tracking method based on super-resolution images is applied to a seeker system and comprises the following steps: acquiring a first image and determining a transformation matrix for the first image; correcting the first image according to the transformation matrix to obtain a second image; performing super-resolution reconstruction on the second image to obtain a first super-resolution image corresponding to the second image; and tracking the target area in the first super-resolution image.
In the above aspect, the determining a transformation matrix for the first image includes:
extracting angular points of the first image and the third image to respectively obtain a first angular point set corresponding to the first image and a second angular point set corresponding to the third image; the third image characterizes a previous frame of the first image; the first corner set and the second corner set are used for recording the positions of the corner points;
Extracting a first direction gradient histogram feature corresponding to each corner in the first corner set and extracting a second direction gradient histogram feature corresponding to each corner in the second corner set;
matching the first corner set and the second corner set according to the first direction gradient histogram feature and the second direction gradient histogram feature to obtain a third corner set; the third corner set is used for recording matched corner points;
And calculating the transformation matrix according to the third corner set.
In the above solution, the tracking the target area in the first super-resolution image includes:
Performing target detection on each region in the first super-resolution image based on the trained target detection model to obtain a first target detection result of each region in the first super-resolution image; wherein each region in the first super-resolution image is divided by block sliding;
combining the first target detection results of all areas in the first super-resolution image to obtain a second target detection result related to the first super-resolution image;
determining the target area in the first super-resolution image according to the second target detection result;
And tracking the target area based on a correlation filtering method.
In the above aspect, the determining the target area in the first super-resolution image according to the second target detection result includes any one of the following:
determining a region with highest confidence in the second target detection result as the target region;
Determining a region with the largest overlapping area in the second target detection result and the third target detection result as the target region; the third target detection result represents a target detection result of a previous frame image of the first image;
Transmitting the second target detection result to a ground workstation, receiving a feedback instruction of the ground workstation about the second target detection result, and determining the target area according to the feedback instruction; the feedback instruction is used for selecting a target area from the second target detection result;
determining a region in the second target detection result whose size matches a set size as the target region; the set size is estimated based on the missile-target distance.
In the above aspect, the acquiring the first image and determining the transformation matrix related to the first image includes:
Transmitting the first image to a ground workstation and receiving feedback information returned by the ground workstation; the feedback information includes the transformation matrix; the transformation matrix is determined for the ground station based on the first image.
In the above scheme, the feedback information further includes the target area, and the correcting the first image according to the transformation matrix to obtain a second image includes:
correcting the target area in the first image according to the transformation matrix to obtain the second image corresponding to the target area; the target area is determined by the ground workstation performing target detection on the second super-resolution image; the second super-resolution image is obtained by the ground workstation performing correction processing and super-resolution reconstruction processing on the first image.
A target tracking method based on super-resolution images, applied to a ground workstation, the method comprising:
Receiving a first image transmitted by a seeker system and determining a transformation matrix for the first image;
Correcting the first image according to the transformation matrix to obtain a fourth image;
performing super-resolution reconstruction on the fourth image to obtain a second super-resolution image corresponding to the fourth image;
and determining a target area in the second super-resolution image, packaging the target area into target information, and transmitting the target information to the seeker system so that the seeker system tracks the target area.
A super-resolution image-based target tracking device applied to a seeker system, comprising:
a determination module for acquiring a first image and determining a transformation matrix for the first image;
the correction module is used for correcting the first image according to the transformation matrix to obtain a second image;
The reconstruction module is used for carrying out super-resolution reconstruction on the second image to obtain a first super-resolution image corresponding to the second image;
and the tracking module is used for tracking the target area in the first super-resolution image.
A seeker system includes a memory storing a computer program and a processor that implements the steps of the above super-resolution image-based target tracking method when executing the computer program.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the above-described super-resolution image-based target tracking method.
According to the target tracking method, device, electronic equipment and medium based on super-resolution images, the seeker system corrects and super-resolution-reconstructs the acquired image and tracks the target on the super-resolution image. This reduces the influence of aircraft shake on the image, enhances the features of the target in the image, and thereby improves the accuracy of target detection and tracking.
Drawings
FIG. 1 is a flow chart of a target tracking method based on super-resolution images in one embodiment;
FIG. 2 is a flow diagram of determining a transformation matrix for a first image in one embodiment;
FIG. 3 is a flow diagram of tracking a target area in one embodiment;
FIG. 4 is a flow chart of a target tracking method based on super-resolution images according to another embodiment;
FIG. 5 is a block diagram of a target tracking device based on super-resolution images in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Before describing the technical scheme of the application in detail, the image guidance in the related art is first briefly described.
In the related art, the application of image guidance in the field of aircraft control is becoming more and more important, wherein strapdown image guidance is one of the ways of image guidance. The strapdown image guidance is a guidance mode which utilizes the combination of an inertial navigation system and an image processing technology, and can realize more accurate target positioning and navigation by combining the data of the inertial navigation system with the image processing technology.
In image guidance, the size and range of the field of view are very important for target identification and positioning, because the field of view determines the area and range that the system can cover. In general, strapdown image guidance has a large field of view that contains the target; after the field-of-view image is acquired, the target needs to be further locked to obtain its position, so that the aircraft can be controlled to fly toward the target.
However, in current strapdown image guidance schemes, because the imaging distance is far and the field of view is large, the background occupies most of the field-of-view image, the foreground target occupies a small image area with inconspicuous features, and background areas differ greatly across combat scenes. As a result, target identification may capture the wrong target or fail to capture the target at all. Moreover, because the strapdown image is affected by aircraft shake, the acquired field-of-view image is noisy, which further increases the difficulty of target capture and makes the target hard to track.
Based on the above, the target tracking method based on the super-resolution image provided by the embodiment of the application can accurately position and track the target.
Implementation details of the technical scheme of the embodiment of the present application are described in detail below.
The application provides a target tracking method based on a super-resolution image, as shown in fig. 1, fig. 1 shows a flow diagram of the target tracking method based on the super-resolution image, wherein the target tracking method based on the super-resolution image can comprise the following steps:
step S101, a first image is acquired, and a transformation matrix for the first image is determined.
Here, the first image is acquired using the seeker system. First, the seeker system needs to be activated and confirmed to be working properly; image acquisition parameters such as resolution, frame rate and exposure time are then set, and image acquisition is started by a timing trigger or an external signal trigger. The first image is a field-of-view image, that is, an image of a specific area captured by the seeker system through a camera or other sensor.
In practical applications, the images acquired by the seeker system may be affected by factors such as attitude changes, viewing-angle changes and camera distortion; lens distortion (such as radial and tangential distortion) may also deform the image. The acquired first image therefore needs to be registered and distortion-corrected using a transformation matrix, so that images can be compared and analyzed in the same coordinate system and the shapes of objects in the first image remain true.
In practical applications, there are various methods for determining the transformation matrix for the first image. One option is feature point matching: feature points are extracted from two images and matched, and the transformation matrix is calculated from the matched points; commonly used feature points include corner points and edge points. The process of determining the transformation matrix is described in detail below with reference to fig. 2.
In one embodiment, as shown in fig. 2, fig. 2 shows a schematic flow chart of determining a transformation matrix for a first image.
Step S201, extracting corner points of the first image and the third image, and respectively obtaining a first corner point set corresponding to the first image and a second corner point set corresponding to the third image.
It will be appreciated that the feature point matching needs to be performed on two different images, in this embodiment, the feature point matching is performed between a first image and a third image, where the first image is the image currently acquired by the seeker system, and the third image is the previous frame image of the first image.
The corner points in the first image are extracted to obtain the first corner set, and the corner points in the third image are extracted to obtain the second corner set. The first and second corner sets record the position coordinates of each corner point, generally expressed in the form (x, y).
Corner points are points with salient features in an image, usually corners or intersections, and have clear correspondences across different images. In practical applications, corner points can be identified by computing local features of the image; commonly used corner extraction algorithms include Harris corner detection, Shi-Tomasi corner detection and FAST (Features from Accelerated Segment Test) corner detection, all of which detect corner points in an image and return the position coordinates of each corner point.
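As an illustration, the corner extraction step might look as follows with OpenCV's Shi-Tomasi detector. This is a minimal sketch; the parameter values (corner count, quality level, spacing) are assumptions, not values prescribed by the invention:

```python
import cv2
import numpy as np

def extract_corners(gray_image, max_corners=500):
    """Detect Shi-Tomasi corners and return their (x, y) coordinates."""
    corners = cv2.goodFeaturesToTrack(gray_image, max_corners, 0.01, 8)
    # goodFeaturesToTrack returns an (N, 1, 2) float array, or None if nothing found
    return np.empty((0, 2), np.float32) if corners is None else corners.reshape(-1, 2)

# first_set  = extract_corners(first_gray)   # first image    -> first corner set
# second_set = extract_corners(third_gray)   # previous frame -> second corner set
```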
Step S202, extracting a first direction gradient histogram feature corresponding to each corner in the first corner set, and extracting a second direction gradient histogram feature corresponding to each corner in the second corner set.
Here, each corner in the first corner set is analyzed, a first direction gradient histogram feature corresponding to each corner in the first corner set is determined, each corner in the second corner set is analyzed, and a second direction gradient histogram feature corresponding to each corner in the second corner set is determined.
For each detected corner point, a fixed-size local area (also called a window) can be defined, usually a square or circular region centered on the corner point. For each local area, the gradient magnitude and direction of the interior pixels are calculated, for example using the Sobel operator or another gradient operator. The local area is divided into several small cells, usually 8 x 8 pixels in size; for the pixels inside each cell, the gradient direction information is distributed into a fixed number of direction intervals (bins) to form a histogram of oriented gradients. The histogram of each cell is normalized to reduce the influence of factors such as illumination and shadow, and finally the histograms of all cells are concatenated into one long vector, namely the directional gradient histogram feature of the corner position.
Through the above steps, the directional gradient histogram feature of each corner position in the first and third images can be calculated; these features can be used for tasks such as target detection and object identification. Since the directional gradient histogram is a local feature, it is typically computed over a local region around each corner.
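A sketch of the per-corner feature extraction, using OpenCV's HOGDescriptor over a 32 x 32 window of 8 x 8 cells with 9 orientation bins; the window size and the border handling are illustrative assumptions:

```python
import cv2
import numpy as np

# 32x32 window, 16x16 blocks, 8x8 stride and cells, 9 orientation bins
hog = cv2.HOGDescriptor((32, 32), (16, 16), (8, 8), (8, 8), 9)

def corner_hog_features(gray_image, corners, half=16):
    """Compute one HOG descriptor per corner, skipping corners near the border."""
    h, w = gray_image.shape
    feats, kept = [], []
    for x, y in corners.astype(int):
        if half <= x < w - half and half <= y < h - half:
            patch = gray_image[y - half:y + half, x - half:x + half]
            feats.append(hog.compute(patch).ravel())  # normalized block histograms
            kept.append((x, y))
    return np.array(feats), np.array(kept)
```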
Step S203, matching the first corner set and the second corner set according to the first direction gradient histogram feature and the second direction gradient histogram feature to obtain a third corner set.
Here, the first direction gradient histogram feature of each corner in the first corner set is matched against the second direction gradient histogram features of all corners in the second corner set. After all corners in the first corner set have been processed, a third corner set is obtained, which records the corner pairs whose first and second direction gradient histogram features match.
In practical applications, nearest-neighbor matching is adopted. For each corner in the first corner set, the similarity or distance between its first direction gradient histogram feature and the second direction gradient histogram features of all corners in the second corner set is calculated, using measures such as Euclidean distance or cosine similarity. A threshold is then applied to screen the matches: two corners are considered successfully matched only when the similarity of their histograms exceeds the threshold (or their distance falls below it). The successfully matched corners form a new set of point pairs (the third corner set) for subsequent use.
It should be noted that the feature comparison method and the choice of similarity or distance threshold affect the accuracy and stability of the matching result.
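The nearest-neighbour matching with a distance threshold described above could be sketched as follows; the threshold value is an assumption and would need tuning:

```python
import numpy as np

def match_corners(feats_a, pts_a, feats_b, pts_b, max_dist=0.5):
    """Pair each corner in set A with its nearest HOG neighbour in set B."""
    matches = []
    for i, fa in enumerate(feats_a):
        dists = np.linalg.norm(feats_b - fa, axis=1)  # Euclidean distances
        j = int(np.argmin(dists))                     # nearest neighbour in set B
        if dists[j] < max_dist:                       # screen out weak matches
            matches.append((pts_a[i], pts_b[j]))
    return matches  # the third corner set: matched (x, y) pairs
```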
Step S204, calculating the transformation matrix according to the third corner set.
Here, after performing corner matching, applications such as image registration, motion estimation, and the like may be implemented by calculating a transformation matrix. The method for calculating the transformation matrix generally includes the following methods:
(1) Direct calculation: for corner points that match successfully, the transformation matrix between them can be calculated directly. For example, in two-dimensional image registration, affine transformation or perspective transformation may be used to describe transformation relationships between images, which are achieved by computing transformation matrices between matching corner points.
(2) Random sample consensus (RANSAC): the RANSAC algorithm can be used to calculate the transformation matrix between the matched corner points; it finds an optimal set of matched corners by random sampling and model fitting, and then computes the optimal transformation matrix.
(3) Least squares method: in some special cases, a least squares method may be used to calculate the transformation matrix between the matched corner points. For example, in two-dimensional image registration, if the correspondences between matched corner points are known, the parameters of an affine or perspective transformation can be estimated by least squares.
It should be noted that different methods for calculating the transformation matrix suit different scenes and applications; choosing a suitable method improves calculation accuracy and efficiency. In addition, during the calculation, the correspondences between the matched corner points must be handled carefully to ensure the accuracy of the result.
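For instance, the RANSAC route (method 2) maps directly onto OpenCV's robust estimators; a minimal sketch, with the 3-pixel reprojection threshold as an assumed value:

```python
import cv2
import numpy as np

# src/dst: paired coordinates from the third corner set (at least 4 pairs)
src = np.float32([pa for pa, pb in matches])
dst = np.float32([pb for pa, pb in matches])

# RANSAC fits models to random minimal samples and keeps the one with the
# most inliers (here, points with reprojection error below 3 px)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

# For a 6-parameter affine model instead of a full homography:
# A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
```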
Step S102, correcting the first image according to the transformation matrix to obtain a second image.
Here, the first image is corrected using the calculated transformation matrix. Common transformation matrices include affine transformation matrices and perspective transformation matrices.
After determining the transformation matrix, the transformation matrix may be applied to the first image, for affine transformation, the affine transformation matrix may be used to translate, rotate, scale, etc., and for perspective transformation, the perspective transformation matrix may be used to perspective correct the image.
In the process of image correction, interpolation processing needs to be performed on pixels in the first image to ensure the quality of the transformed image, and common interpolation methods include bilinear interpolation, bicubic interpolation, and the like.
In practical applications, the calculation and application of the transformation matrix may be implemented using functions provided by the image processing library, and by applying the transformation matrix to the original image, a corrected image may be obtained.
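With OpenCV as the image processing library, the correction step reduces to a single warp call; a sketch, assuming `H` is the homography computed above:

```python
import cv2

h, w = first_image.shape[:2]

# Perspective correction with bilinear interpolation, as discussed above
second_image = cv2.warpPerspective(first_image, H, (w, h), flags=cv2.INTER_LINEAR)

# The affine case is analogous, given a 2x3 matrix A:
# second_image = cv2.warpAffine(first_image, A, (w, h), flags=cv2.INTER_CUBIC)
```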
Step S103, performing super-resolution reconstruction on the second image to obtain a first super-resolution image corresponding to the second image.
Super-resolution reconstruction here refers to the process of reconstructing a high resolution image from one or more low resolution images, which is advantageous for accurate positioning and tracking of the target in image guidance.
Performing super-resolution reconstruction on the corrected second image means increasing the detail and definition of the image through image processing algorithms to obtain a high-resolution image. Super-resolution reconstruction can be realized in various ways, including interpolation-based methods and deep learning-based methods.
The first super-resolution image has a higher resolution than the first image, so it records clearer and finer details, which is very useful for locating targets in the first super-resolution image.
In practical applications, the second image may be input into a super-resolution neural network model based on the super-resolution residual network (SRResNet) framework to realize super-resolution reconstruction of the image and obtain the first super-resolution image.
SRResNet is a deep-learning-based super-resolution reconstruction model. Its basic idea is to learn, through a convolutional neural network, the mapping from a low-resolution image to a high-resolution image, thereby realizing super-resolution reconstruction. The training process of the super-resolution neural network model is described below.
(1) Data preparation: first, a target image dataset needs to be prepared, comprising low resolution images and corresponding high resolution images, which will be used for training the super resolution neural network model.
(2) Model training: the prepared training dataset is trained using SRResnet frames or other similar super-resolution neural network models. In the training process, the model learns the mapping relation between the low-resolution image and the high-resolution image so as to be capable of carrying out super-resolution reconstruction on the new low-resolution image.
(3) Model evaluation: after the model is trained, it needs to be evaluated to ensure that it performs well on the super-resolution reconstruction task. Typically, the evaluation is performed on a validation dataset, and the evaluation indices may include peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and the like.
Under the condition that the performance of the trained model can meet the set requirement, super-resolution reconstruction can be performed on the new low-resolution image by using the trained model. The model will enhance the detail features of the image according to the learned mapping relationship, thereby obtaining a high resolution image.
In practical applications, training of the SRResNet super-resolution neural network model and image super-resolution reconstruction can be implemented using tools and libraries provided by a deep learning framework.
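As an illustration of steps (1) and (2) above, a heavily reduced SRResNet-style network and training loop in PyTorch. The architecture sizes, the L1 loss choice, and the `loader` yielding (low-resolution, high-resolution) pairs are all assumptions for the sketch, not the patented model:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)  # residual connection

class TinySRNet(nn.Module):
    """Reduced SRResNet-style network: head -> residual body -> x4 upsampling tail."""
    def __init__(self, ch=64, blocks=4):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, ch, 9, padding=4), nn.PReLU())
        self.body = nn.Sequential(*[ResidualBlock(ch) for _ in range(blocks)])
        self.tail = nn.Sequential(
            nn.Conv2d(ch, ch * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
            nn.Conv2d(ch, ch * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
            nn.Conv2d(ch, 3, 9, padding=4))
    def forward(self, x):
        return self.tail(self.body(self.head(x)))

model = TinySRNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.L1Loss()  # pixel-wise loss between reconstruction and ground truth

# `loader` is assumed to yield (low_res, high_res) tensor pairs
for low_res, high_res in loader:
    loss = criterion(model(low_res), high_res)  # learned LR -> HR mapping
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```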
Step S104, tracking the target area in the first super-resolution image.
In the first image the target area is small and its features are inconspicuous, which makes detecting and tracking the target area difficult; the first super-resolution image records the image features much more distinctly, so the accuracy of detecting and tracking the target area can be improved.
The target region is the region of the image in which the target or object of interest is located; the target may be an object of any type, such as a human face, a vehicle or an animal. In this embodiment, the target area is the area that needs to be located, identified or tracked. It typically consists of pixels in the image and can be identified, located and tracked by image processing and computer vision algorithms.
Tracking the target region in the first super-resolution image means tracking the position and size of the target object across successive frames. Specifically, features are extracted from the target area, matched against the features of the target area in the previous frame to find the best match, and the position of the target area in the first super-resolution image is calculated from the matching result; the position information of the target area is then updated. In practical applications, tracking algorithms such as Kalman filtering or particle filtering can be used to predict and smooth the target position, improving tracking stability and accuracy.
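The Kalman-filter smoothing mentioned above could be sketched with OpenCV's built-in filter under a constant-velocity assumption; the noise covariance and the example measurement values are illustrative:

```python
import cv2
import numpy as np

# Constant-velocity model: state = (x, y, vx, vy), measurement = (x, y)
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

prediction = kf.predict()                       # predicted target centre for this frame
cx, cy = 320.0, 240.0                           # example: matched detection centre
kf.correct(np.array([[cx], [cy]], np.float32))  # fold the measurement back in
```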
It should be noted that, the first super-resolution image obtained after the super-resolution reconstruction provides more detailed information, which is helpful for improving the accuracy of target detection and tracking, however, the processing of the first super-resolution image also increases the computational cost and complexity, so that a proper algorithm and method need to be selected to complete the target tracking task.
In one embodiment, as shown in FIG. 3, FIG. 3 shows a schematic flow diagram for tracking a target area.
Step S301, performing target detection on each region in the first super-resolution image based on the trained target detection model to obtain a first target detection result of each region in the first super-resolution image.
Here, when detecting the target area in the first super-resolution image, the high resolution of the image means that running target detection directly on the entire image would require an excessive amount of computation; the first super-resolution image is therefore processed in a block-sliding manner.
First, the first super-resolution image is divided into several equally sized blocks by block sliding; each region extracted in this way is input into the trained target detection model, which performs target detection on it and yields the first target detection result corresponding to that region.
It should be noted that, in the process of sliding the blocks, a situation that the target spans multiple areas may occur, and at this time, special processing needs to be performed on the target that spans the boundary, so as to ensure accuracy of the detection result.
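A sketch of the block-sliding division; the block size and overlap are assumed values, with the overlap retained so that targets near block boundaries appear whole in at least one block:

```python
def sliding_blocks(image, block=640, overlap=128):
    """Yield (x0, y0, patch) tiles that jointly cover the super-resolution image."""
    h, w = image.shape[:2]
    step = block - overlap            # neighbouring tiles share `overlap` pixels
    for y0 in range(0, max(h - overlap, 1), step):
        for x0 in range(0, max(w - overlap, 1), step):
            yield x0, y0, image[y0:min(y0 + block, h), x0:min(x0 + block, w)]

# first_results = [detect(patch, x0, y0)   # `detect` is a hypothetical model call
#                  for x0, y0, patch in sliding_blocks(sr_image)]
```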
The training process of the object detection model is described below.
First, a target image dataset needs to be prepared, containing labeled training images and corresponding annotation files; the labels typically include the bounding-box coordinates and class of each target. In this embodiment, a suitable YOLO model is selected as the base model; models of different sizes can be chosen according to actual requirements. The YOLO model is trained on the prepared training dataset; during training, the model learns how to accurately detect targets in images and continually adjusts its parameters to improve detection accuracy. After training, the model is evaluated on the validation dataset to ensure that it performs well on the target detection task; evaluation indices may include precision, recall and mean average precision. If the evaluation indices of the trained model meet the preset requirements, the model can be applied to target detection.
Step S302, combining the first target detection results of all areas in the first super-resolution image to obtain a second target detection result related to the first super-resolution image.
After the first target detection result of each region is obtained, the results of the regions are merged; the following approach may be adopted:
a: and projecting the target detection result of each block onto the first super-resolution image to obtain the position information and the confidence information of each target on the first super-resolution image.
B: for overlapping targets, some method may be used for merging, for example, taking the target with the highest confidence as the final result.
C: for non-overlapping targets, they are added directly to the final detection result.
After the first target detection results of the areas are combined in the mode, a second target detection result can be finally obtained, wherein the second target detection result is the target detection result of the whole first super-resolution image.
It should be noted that when merging the first target detection results of the regions, the overlap between regions must be considered to avoid detecting the same target repeatedly. The block size and the size of the overlap region also need attention, to ensure the accuracy of the second target detection result.
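Steps A to C above amount to projecting each block's boxes into full-image coordinates and then running a greedy, confidence-ordered merge. A sketch, assuming detections are dictionaries with `box` (x1, y1, x2, y2, already projected) and `conf` fields:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter + 1e-9)

def merge_detections(dets, iou_thresh=0.5):
    """Keep the highest-confidence box among each group of overlapping boxes."""
    dets = sorted(dets, key=lambda d: d["conf"], reverse=True)
    kept = []
    for d in dets:
        if all(iou(d["box"], k["box"]) < iou_thresh for k in kept):
            kept.append(d)  # non-overlapping boxes are added directly
    return kept  # the second target detection result
```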
Step S303, determining a target area in the first super-resolution image according to the second target detection result.
After the second target detection result is determined, the target area, i.e. the area to be focused on, can be selected from it. The second target detection result contains the position information of each candidate area, including its coordinates, width and height. According to this position information, the target area is determined in the first super-resolution image: the coordinates are transformed and mapped onto the first super-resolution image to obtain the pixel coordinates of the target area.
In one embodiment, the determination of the target area may be determined in the following manner.
In the first mode, the region with the highest confidence in the second target detection result is determined as the target region.
The result of target detection is typically a rectangular box surrounding the target, whose position and size determine the position and size of the target. When multiple targets are detected, the confidence can be used to decide which is most likely. Specifically, the targets in the second target detection result are processed in descending order of confidence; the target with the highest confidence is found, and its rectangular box is taken as the target area for subsequent tracking.
In the second mode, the region with the largest overlapping area between the second target detection result and the third target detection result is determined as the target region.
Here, the third target detection result is the target detection result of the previous frame image. All targets in the second and third target detection results are first matched, using target identifiers or other specific attributes. For each matched target, the corresponding rectangular boxes in the two results are found and their overlapping area is calculated. The rectangular box with the largest overlapping area is taken as the target area; if the overlapping area is smaller than a threshold, the two detections are considered unmatched, and target detection must be repeated or another method used to determine the target area.
Because the third target detection result comes from the previous frame image, the motion state of the target, such as its position and speed, needs to be considered during matching. If the motion state of the target changes, more complex methods may be required for target tracking and region determination.
In the third mode, the area designated by the ground workstation is determined as the target area.
Here, the second target detection result can be transmitted to the ground workstation. After receiving it, an operator analyzes the detections, selects the target area from them, and returns a feedback instruction; the feedback instruction may contain confirmation of the target area, or even an instruction to perform a new round of target detection.
The seeker system can determine the target area in the first super-resolution image according to the feedback instruction of the ground workstation, and the determination mode of the target area can ensure that ground operators can monitor and adjust the target area in real time.
In the fourth mode, an area matching the set size is determined as the target area.
Here, the target size can be estimated from the missile-target distance, which generally refers to the distance between the guided weapon and the target, to obtain the set size. In an image guidance system, a camera or sensor captures an image of the target, and the distance between the missile and the target is determined through image processing algorithms. Once the missile-target distance is known, the size of the target on the image can be estimated, so when determining the target area, the second target detection result can be screened by the set size, and a region whose size is close to the set size is determined as the target area.
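Under a simple pinhole-camera assumption, the set size follows directly from the missile-target distance; the numbers in the comment are purely illustrative:

```python
def expected_pixel_size(real_size_m, distance_m, focal_px):
    """Pinhole estimate of a target's extent in pixels: size * focal / distance."""
    return focal_px * real_size_m / distance_m

# Illustrative: a 5 m target at 2000 m with a 4000 px focal length spans
# about 10 px in the raw image, or about 40 px after 4x super-resolution.
```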
Step S304, tracking the target area based on a correlation filtering method.
Here, the target area is tracked. In practical applications there are many target tracking methods, including correlation-filter-based tracking, optical-flow-based tracking and Kalman filtering. In this embodiment, correlation filtering is selected: a template is correlated with image blocks of the current frame, and the template is updated according to the correlation response to achieve target tracking.
The process of tracking the target area with the correlation filtering method is described in detail below; a minimal code sketch follows the steps.
(1) Initializing: first, a target template, which may be an image block of the target region, needs to be initialized. Typically, the target region may be manually selected in the first frame as the initial template.
(2) Extracting characteristics: for each frame of image, features such as gray values, gradient information, and the like need to be extracted around the target region.
(3) Calculating a response: using the target template and the extracted features, a similarity score is calculated for the target template to the current image block, which may be accomplished by calculating a correlation.
(4) Updating the template: according to the calculated similarity score, the weight of the target template can be updated so that the template better matches the target region.
(5) Target positioning: and positioning the target area in the next frame of image according to the updated template.
(6) Iterative tracking: repeating the steps (2) to (5) until the target area is successfully tracked.
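As referenced above, a minimal sketch of this loop using OpenCV's CSRT tracker, a discriminative correlation-filter tracker shipped in opencv-contrib builds (older versions expose it as cv2.legacy.TrackerCSRT_create). The video source and box values are placeholders:

```python
import cv2

tracker = cv2.TrackerCSRT_create()               # correlation-filter tracker

cap = cv2.VideoCapture("sr_sequence.mp4")        # placeholder frame source
ok, frame = cap.read()
target_box = (100, 100, 48, 48)                  # (x, y, w, h) from detection; example
tracker.init(frame, target_box)                  # step (1): initialize the template

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)           # steps (2)-(5): correlate and update
    if found:
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```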
In this embodiment, the target area is located and tracked on the first super-resolution image. Since the super-resolution image provides clearer features and richer details, the target area can be located better and faster during tracking, and tracking accuracy is improved.
In the above embodiment, the correction and super-resolution reconstruction of the first image by the seeker system, and the identification of the target region can be divided into two cases, the first case is where the seeker system processes the first image alone for target tracking, and the second case is where the seeker system cooperates with the ground workstation for target tracking.
In the first case, the seeker system performs correction processing and super-resolution reconstruction on the entire image, as in the embodiments described above, and determines and tracks the target region on the resulting super-resolution image. This places a heavy demand on the resources of the seeker system. In practice, given the limited resources of the seeker system, it may be used in conjunction with a ground workstation to reduce the load on the seeker. In that case, the seeker system performs correction processing and super-resolution reconstruction only on the target area in the first image, where the target area is determined by the ground workstation; the way the seeker system processes the image in the second case thus differs in some implementation details from the first case.
The workflow of the seeker system in the context of use with a ground station is described in detail below.
In one embodiment, in a scenario where the seeker system is used in conjunction with a ground workstation, the seeker system transmits the acquired first image to the ground workstation, which processes it. The ground workstation determines the transformation matrix for the first image and returns it to the seeker system in the feedback information, so that the seeker system can obtain the transformation matrix directly from the feedback information without processing the first image itself, thereby saving resources of the seeker system.
In one embodiment, the feedback information received by the seeker system further includes a target region. After receiving the first image, the ground workstation determines the transformation matrix for the first image, corrects the first image with it, performs super-resolution reconstruction on the corrected image to obtain a second super-resolution image, and performs target detection on the second super-resolution image to determine the target area. The ground workstation then returns the target area to the seeker system through the feedback information, so the seeker system can determine the target area directly from the feedback information without performing target detection itself, thereby saving resources of the seeker system.
In practical applications, because the first image is captured from a long distance, the target appears small in the first image and its features are inconspicuous, so locating and tracking the target area based on the first image alone is difficult and inaccurate. Therefore, even with ground workstation cooperation, the seeker system still needs to correct the first image and perform super-resolution reconstruction. In this embodiment, however, the transformation matrix is used to correct only the target area in the first image rather than the whole image, yielding a second image that is the corrected image of the target area.
Correcting only the target area instead of the entire first image saves resources of the seeker system to a certain extent: when the seeker performs target tracking alone, the second image is the corrected version of the whole first image, whereas when the seeker cooperates with the ground workstation, the second image is the corrected image of the target area only.
It should be noted that when the second image is the corrected image of the target area, super-resolution reconstruction is performed only on the pixels corresponding to the target area, so the first super-resolution image obtained in this case is a super-resolution image of the target area. This again saves resources of the seeker system: when the seeker tracks the target alone, the first super-resolution image covers the whole first image, whereas with ground workstation cooperation it covers only the corrected target area.
In the above embodiment, the seeker system can eliminate the shake caused by the aircraft by correcting the acquired image, so that the image is more stable and smooth. And then carrying out super-resolution reconstruction on the corrected image, and tracking the target area on the super-resolution image, wherein the super-resolution image can record more characteristics and details, so that the characteristics of the target area are enhanced, and the tracking precision of the target area can be improved.
In one embodiment, the workflow of the ground station is described in detail in connection with fig. 4 in the context of a seeker system for use with the ground station. Fig. 4 shows a flow chart of a target tracking method based on super-resolution images.
Step S401 receives a first image transmitted by the seeker system and determines a transformation matrix for the first image.
Here, where ground workstation cooperation is required, the seeker system transmits the acquired first image to the ground workstation. The ground workstation receives the first image and processes it to determine the transformation matrix for the first image.
The ground workstation can determine the transformation matrix of the first image by feature point matching: feature points are extracted from the two images and matched, and the matched points are used to calculate the transformation matrix; commonly used feature points include corner points and edge points. For the specific flow of determining the transformation matrix, refer to steps S201 to S204.
Step S402, correcting the first image according to the transformation matrix to obtain a fourth image.
Here, the determined transformation matrix is applied to the first image to correct it, obtaining the fourth image, which is the corrected image. In this embodiment the entire first image is corrected, so the fourth image is the corrected version of the whole first image, and the influence of aircraft shake on the image is eliminated.
Step S403, performing super-resolution reconstruction on the fourth image to obtain a second super-resolution image corresponding to the fourth image.
Super-resolution reconstruction here refers to the process of reconstructing a high resolution image from one or more low resolution images, which is advantageous for accurate positioning and tracking of the target in image guidance.
After the fourth image is determined, super-resolution reconstruction is performed on it to increase its detail and definition and obtain a high-resolution image. The super-resolution reconstruction can be realized in various ways, including interpolation-based methods and deep learning-based methods.
Step S404, determining a target area in the second super-resolution image, and packaging the target area into feedback information and transmitting the feedback information to the seeker system.
Here, since the second super-resolution image has higher definition and more image details, the target region can be accurately located in the second super-resolution image, wherein the operator of the ground workstation can also determine the target region directly in the second super-resolution image, that is, by manually selecting the target region. In practical applications, the target detection algorithm may also be used to perform target detection on the second super-resolution image, so as to determine the target area in the second super-resolution image, and the specific flow may refer to steps S301 to S303.
After the ground workstation determines the target area, it packages the target area into feedback information and transmits it to the seeker system. The feedback information includes the position information of the target area and the like, so that the seeker system can locate and track the target area.
In this embodiment, the projectile travels at high speed and the target changes rapidly, which places great demands on the response speed and composure of ground workstation operators. The ground workstation performs super-resolution reconstruction on the first image, enhancing the target features and enlarging the target so that operators can see the target area clearly; using a target detection algorithm to detect the target area in the image further reduces the operators' workload, and thus the influence of human error on the accuracy of the target area.
It should be noted that the ground workstation's processing flow for the first image is similar to the seeker system's stand-alone processing flow; in effect, the stand-alone flow is carried out by the ground workstation instead, saving resources of the seeker system.
In the above embodiment, the ground workstation performs correction processing on the received first image, so that the influence of the aircraft shake on the image can be eliminated, super-resolution reconstruction is performed on the corrected image, and the quality and detail of the image are enhanced, so that the accuracy of determining the target area can be improved, and further the tracking accuracy of the target area is improved.
In one embodiment, a super-resolution image-based target tracking apparatus is provided, and referring to fig. 5, the super-resolution image-based target tracking apparatus 500 may include: a determination module 501, a correction module 502, a reconstruction module 503, and a tracking module 504.
Wherein, the determining module 501 is configured to acquire a first image and determine a transformation matrix related to the first image; the correction module 502 is configured to correct the first image according to a transformation matrix to obtain a second image; a reconstruction module 503, configured to perform super-resolution reconstruction on the second image, to obtain a first super-resolution image corresponding to the second image; and the tracking module 504 is configured to track the target area in the first super-resolution image.
In one embodiment, the determining module 501 is specifically configured to extract corner points of the first image and the third image, obtaining a first corner set corresponding to the first image and a second corner set corresponding to the third image respectively; the third image characterizes the previous frame image of the first image; the first corner set and the second corner set are used for recording the positions of the corner points; extract a first direction gradient histogram feature corresponding to each corner in the first corner set and a second direction gradient histogram feature corresponding to each corner in the second corner set; match the first corner set and the second corner set according to the first and second direction gradient histogram features to obtain a third corner set, which records the matched corner points; and calculate the transformation matrix according to the third corner set.
In one embodiment, the tracking module 504 is specifically configured to perform target detection on each region in the first super-resolution image based on the trained target detection model, so as to obtain a first target detection result of each region in the first super-resolution image; wherein each region in the first super-resolution image is divided by block sliding; combining the first target detection results of all areas in the first super-resolution image to obtain a second target detection result related to the first super-resolution image; determining a target area in the first super-resolution image according to the second target detection result; tracking the target area based on a correlation filtering method.
In one embodiment, tracking module 504 is specifically configured for any one of the following:
determining a region with highest confidence in the second target detection result as a target region;
Determining a region with the largest overlapping area in the second target detection result and the third target detection result as a target region; the third target detection result represents the target detection result of the previous frame of image of the first image;
Transmitting the second target detection result to a ground workstation, receiving a feedback instruction of the ground workstation about the second target detection result, and determining a target area according to the feedback instruction; the feedback instruction is used for selecting a target area in the second target detection result;
determining a region in the second target detection result whose size matches a set size as the target region; the set size is estimated based on the missile-target distance.
In one embodiment, the determining module 501 is specifically configured to transmit the first image to a ground workstation and receive feedback information returned by the ground workstation; the feedback information includes a transformation matrix; the transformation matrix is determined for the ground station based on the first image.
In one embodiment, the feedback information further includes a target area, and the correction module 502 is specifically configured to correct the target area in the first image according to the transformation matrix, so as to obtain a second image corresponding to the target area; the target area is determined by target detection of the second super-resolution image by the ground workstation; the second super-resolution image is obtained by performing correction processing and super-resolution reconstruction processing on the first image based on the ground workstation.
For specific limitations of the super-resolution image-based target tracking apparatus, reference may be made to the above limitations of the super-resolution image-based target tracking method, which are not repeated here. Each of the above modules in the super-resolution image-based target tracking apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware, may be independent of the processor in the computer device, or may be stored in software form in a memory of the computer device so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a seeker system is provided that includes a memory storing a computer program and a processor that, when executing the computer program, implements the super-resolution image-based target tracking method.
In one embodiment, a computer storage medium is provided on which a computer program is stored; when the computer program is executed by a processor, it implements the super-resolution image-based target tracking method.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein may be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be captured electronically, for instance by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, the steps may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the invention.
Claims (8)
1. A target tracking method based on super-resolution images, characterized in that the method is applied to a seeker system and comprises the following steps:
acquiring a first image and determining a transformation matrix for the first image;
correcting the first image according to the transformation matrix to obtain a second image;
performing super-resolution reconstruction on the second image to obtain a first super-resolution image corresponding to the second image;
tracking a target area in the first super-resolution image; wherein,
the transformation matrix is used for registering and correcting the first image;
the acquiring a first image and determining a transformation matrix for the first image comprises: transmitting the first image to a ground workstation and receiving feedback information returned by the ground workstation; the feedback information includes the transformation matrix; the transformation matrix is determined by the ground workstation based on the first image;
the feedback information further includes the target area, and the correcting the first image according to the transformation matrix to obtain a second image comprises: correcting the target area in the first image according to the transformation matrix to obtain the second image corresponding to the target area; the target area is determined by the ground workstation through target detection on the second super-resolution image; the second super-resolution image is obtained by the ground workstation performing correction processing and super-resolution reconstruction processing on the first image.
2. The super-resolution image-based target tracking method according to claim 1, wherein the determining a transformation matrix for the first image comprises:
extracting corner points from the first image and a third image to obtain a first corner set corresponding to the first image and a second corner set corresponding to the third image, respectively; the third image is a previous frame of the first image; the first corner set and the second corner set are used for recording the positions of the corner points;
extracting a first histogram of oriented gradients (HOG) feature corresponding to each corner in the first corner set and a second HOG feature corresponding to each corner in the second corner set;
matching the first corner set and the second corner set according to the first and second HOG features to obtain a third corner set; the third corner set is used for recording the matched corner points;
and calculating the transformation matrix according to the third corner set.
3. The super-resolution image-based target tracking method according to claim 1, wherein the tracking a target area in the first super-resolution image comprises:
performing target detection on each region in the first super-resolution image based on the trained target detection model to obtain a first target detection result for each region in the first super-resolution image; wherein the regions in the first super-resolution image are divided by block sliding;
combining the first target detection results of all regions in the first super-resolution image to obtain a second target detection result for the first super-resolution image;
determining the target area in the first super-resolution image according to the second target detection result;
and tracking the target area based on a correlation filtering method.
4. The super-resolution image-based target tracking method according to claim 3, wherein the determining the target area in the first super-resolution image according to the second target detection result includes any one of:
determining the region with the highest confidence in the second target detection result as the target area;
determining the region in the second target detection result that has the largest overlapping area with a third target detection result as the target area; the third target detection result represents the target detection result of the previous frame of the first image;
transmitting the second target detection result to a ground workstation, receiving a feedback instruction of the ground workstation about the second target detection result, and determining the target area according to the feedback instruction; the feedback instruction is used for selecting the target area from the second target detection result;
determining the region in the second target detection result whose size matches a set size as the target area; the set size is estimated based on the missile-target distance.
5. A target tracking method based on super-resolution images, which is applied to a ground workstation, the method comprising:
receiving a first image transmitted by a seeker system and determining a transformation matrix for the first image; the transformation matrix is used for registering and correcting the first image;
correcting the first image according to the transformation matrix to obtain a fourth image;
performing super-resolution reconstruction on the fourth image to obtain a second super-resolution image corresponding to the fourth image;
and determining a target area in the second super-resolution image, packaging the transformation matrix and the target area into target information, and transmitting the target information to the seeker system so that the seeker system corrects and tracks the target area based on the transformation matrix.
6. A super-resolution image-based target tracking apparatus applied to a seeker system, the apparatus comprising:
a determining module, used for acquiring a first image and determining a transformation matrix for the first image;
a correction module, used for correcting the first image according to the transformation matrix to obtain a second image;
a reconstruction module, used for performing super-resolution reconstruction on the second image to obtain a first super-resolution image corresponding to the second image;
a tracking module, used for tracking a target area in the first super-resolution image; wherein,
the transformation matrix is used for registering and correcting the first image;
the determining module is specifically configured to transmit the first image to a ground workstation and receive feedback information returned by the ground workstation; the feedback information includes the transformation matrix; the transformation matrix is determined by the ground workstation based on the first image;
the correction module is specifically configured to correct the target area in the first image according to the transformation matrix to obtain the second image corresponding to the target area; the target area is determined by the ground workstation through target detection on the second super-resolution image; the second super-resolution image is obtained by the ground workstation performing correction processing and super-resolution reconstruction processing on the first image.
7. A seeker system comprising a memory storing a computer program and a processor, wherein the processor, when executing the computer program, implements the steps of the super-resolution image-based target tracking method of any one of claims 1 to 4.
8. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the super-resolution image-based target tracking method according to any one of claims 1 to 4 or claim 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410176096.0A CN117726656B (en) | 2024-02-08 | 2024-02-08 | Target tracking method, device, system and medium based on super-resolution image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410176096.0A CN117726656B (en) | 2024-02-08 | 2024-02-08 | Target tracking method, device, system and medium based on super-resolution image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117726656A (en) | 2024-03-19
CN117726656B (en) | 2024-06-04
Family
ID=90203873
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410176096.0A Active CN117726656B (en) | 2024-02-08 | 2024-02-08 | Target tracking method, device, system and medium based on super-resolution image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117726656B (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI420906B (en) * | 2010-10-13 | 2013-12-21 | Ind Tech Res Inst | Tracking system and method for regions of interest and computer program product thereof |
US8723959B2 (en) * | 2011-03-31 | 2014-05-13 | DigitalOptics Corporation Europe Limited | Face and other object tracking in off-center peripheral regions for nonlinear lens geometries |
US11620733B2 (en) * | 2013-03-13 | 2023-04-04 | Kofax, Inc. | Content-based object detection, 3D reconstruction, and data extraction from digital images |
CN107895345B (en) * | 2017-11-29 | 2020-05-26 | 浙江大华技术股份有限公司 | Method and device for improving resolution of face image |
CN110827200B (en) * | 2019-11-04 | 2023-04-07 | Oppo广东移动通信有限公司 | Image super-resolution reconstruction method, image super-resolution reconstruction device and mobile terminal |
CN115147723B (en) * | 2022-07-11 | 2023-05-09 | 武汉理工大学 | Inland ship identification and ranging method, inland ship identification and ranging system, medium, equipment and terminal |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101270993A (en) * | 2007-12-12 | 2008-09-24 | 北京航空航天大学 | Remote high-precision independent combined navigation locating method |
JP2016161194A (en) * | 2015-02-27 | 2016-09-05 | 三菱重工業株式会社 | Missile guidance system, missile, missile guiding method and guiding control program |
CN106570886A (en) * | 2016-10-27 | 2017-04-19 | 南京航空航天大学 | Target tracking method based on super-resolution reconstruction |
CN110248059A (en) * | 2019-05-23 | 2019-09-17 | 杭州他若信息科技有限公司 | A kind of object tracking device and method |
CN110298790A (en) * | 2019-06-28 | 2019-10-01 | 北京金山云网络技术有限公司 | A kind of pair of image carries out the processing method and processing device of super-resolution rebuilding |
WO2021031069A1 (en) * | 2019-08-19 | 2021-02-25 | 深圳先进技术研究院 | Image reconstruction method and apparatus |
CN110782483A (en) * | 2019-10-23 | 2020-02-11 | 山东大学 | Multi-view multi-target tracking method and system based on distributed camera network |
CN112037252A (en) * | 2020-08-04 | 2020-12-04 | 深圳技术大学 | Eagle eye vision-based target tracking method and system |
CN112764052A (en) * | 2020-12-25 | 2021-05-07 | 中国人民解放军32181部队 | Air defense missile flight monitoring system |
CN113435384A (en) * | 2021-07-07 | 2021-09-24 | 中国人民解放军国防科技大学 | Target detection method, device and equipment for medium-low resolution optical remote sensing image |
KR102559721B1 (en) * | 2022-11-16 | 2023-07-26 | 주식회사 지디에프랩 | Control method of electronic apparatus for selectively restore images according to field of view of user |
CN116363168A (en) * | 2023-02-28 | 2023-06-30 | 西安电子科技大学 | Remote sensing video target tracking method and system based on super-resolution network |
Non-Patent Citations (4)
Title |
---|
"Air-to-ground target detection and tracking method based on multi-feature fusion"; Zhang Yanguo, Li Qing, Yu Fei, Liu Hengzhi; Electronics Optics & Control; 2018-12-03; Vol. 26, No. 06; 6-11 *
"Video backhaul technology for military micro-UAVs based on super-resolution algorithms"; Yu Sichen, Cheng Chunsheng; Command Information System and Technology; 2022-12-28; Vol. 13, No. 06; 23-28 *
"Research on an integrated image matching experiment and simulation system"; Yang Xiaogang, Zuo Sen, Huang Xianxiang, Guo Fenghua, Xia Kehan; Journal of System Simulation; 2010-06-08 (06); 38-42 *
"Research on image localization algorithms for near-space targets"; Wang Li, Lei Bin, Wang Jianning, Bai Fu; Computer & Digital Engineering; 2016-11-20 (11); 55-59 *
Also Published As
Publication number | Publication date |
---|---|
CN117726656A (en) | 2024-03-19 |
Similar Documents
Publication | Title |
---|---|
CN110555901B (en) | Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes | |
US8903177B2 (en) | Method, computer program and device for hybrid tracking of real-time representations of objects in a sequence | |
US11205276B2 (en) | Object tracking method, object tracking device, electronic device and storage medium | |
US9147260B2 (en) | Detection and tracking of moving objects | |
CN108875730B (en) | Deep learning sample collection method, device, equipment and storage medium | |
CN108955718A (en) | A kind of visual odometry and its localization method, robot and storage medium | |
Klippenstein et al. | Quantitative evaluation of feature extractors for visual slam | |
WO2015017539A1 (en) | Rolling sequential bundle adjustment | |
Urban et al. | Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds | |
CN111383252B (en) | Multi-camera target tracking method, system, device and storage medium | |
do Monte Lima et al. | Model based markerless 3D tracking applied to augmented reality | |
Lisanti et al. | Continuous localization and mapping of a pan–tilt–zoom camera for wide area tracking | |
Romero-Ramirez et al. | Tracking fiducial markers with discriminative correlation filters | |
CN113763466B (en) | Loop detection method and device, electronic equipment and storage medium | |
Streiff et al. | 3D3L: Deep learned 3D keypoint detection and description for LiDARs | |
CN112435223A (en) | Target detection method, device and storage medium | |
Attard et al. | Image mosaicing of tunnel wall images using high level features | |
CN111105436B (en) | Target tracking method, computer device and storage medium | |
CN117726656B (en) | Target tracking method, device, system and medium based on super-resolution image | |
Araar et al. | PDCAT: a framework for fast, robust, and occlusion resilient fiducial marker tracking | |
CN117196954A (en) | Weak texture curved surface image stitching method and device for aircraft skin | |
CN116128919A (en) | Multi-temporal image abnormal target detection method and system based on polar constraint | |
CN113255405B (en) | Parking space line identification method and system, parking space line identification equipment and storage medium | |
CN115294358A (en) | Feature point extraction method and device, computer equipment and readable storage medium | |
CN109242894B (en) | Image alignment method and system based on mobile least square method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||