Bilateral Adversarial Patch Generating Network for the Object Tracking Algorithm
Figure 1. The whole structure of our work. The bilateral adversarial patch-generating network is our designed network for generating adversarial patches for the template and the search region. The three loss functions aim to mislead the network into giving wrong predictions.

Figure 2. Bilateral adversarial patch generating network. In this paper, channel c is set to 64. The network consists of three parts: the backbone for feature extraction, the search region branch for generating the patch for the search region, and the template branch for generating the patch for the template. The right part of the figure shows the inner structure of the corresponding modules in the left part.

Figure 3. DeFocus operation. The colors in the left part represent different channels of the convolutional layer's output. These channels are sequentially mapped to fill the corresponding colored locations in the right part's feature map, transforming a feature map of dimensions w × h × 4c into one of dimensions 2w × 2h × c (a code sketch of this mapping follows the figure captions).

Figure 4. Label assignment. The black square denotes an anchor box with a value of zero, while the white square represents an anchor box with a value of one. The red rectangle represents the ground truth.

Figure 5. Results on the UAV123 dataset. In each subfigure, the left plot shows the success rate of the One Pass Evaluation and the right plot shows its precision. "Clean-xxx" denotes results on the clean test dataset; "Attack-xxx" denotes results on the patched test dataset.

Figure 6. Results on the UAVDT dataset. In each subfigure, the left plot shows the success rate of the One Pass Evaluation and the right plot shows its precision. "Clean-xxx" denotes results on the clean test dataset; "Attack-xxx" denotes results on the patched test dataset.

Figure 7. Some of the visualized results. In all the images, the green box represents the ground truth box of the tracked object. The red, purple, and orange-yellow boxes depict the predicted boxes after attacks by the BAPGNet, SA, and UEN-P algorithms, respectively. The light blue rectangle in the upper right corner of the image indicates the frame of the video.
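The DeFocus operation described in the Figure 3 caption behaves like the standard pixel-shuffle rearrangement (the inverse of YOLOv5's Focus slicing). A minimal sketch under that assumption:

```python
import torch
import torch.nn as nn

# DeFocus sketch: rearrange channels into space, (w, h, 4c) -> (2w, 2h, c).
# Assumption: the mapping matches PyTorch's pixel shuffle with an upscale
# factor of 2; the paper's exact channel ordering may differ.
defocus = nn.PixelShuffle(upscale_factor=2)

x = torch.randn(1, 4 * 64, 32, 32)    # N x 4c x w x h, with c = 64 as in the paper
y = defocus(x)
print(y.shape)                         # torch.Size([1, 64, 64, 64]) -> N x c x 2w x 2h

# The Focus operation is the inverse: slice each 2x2 spatial block into channels.
unfocus = nn.PixelUnshuffle(downscale_factor=2)
print(unfocus(y).shape)                # back to torch.Size([1, 256, 32, 32])
```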
Abstract
1. Introduction
- We propose a novel approach called the Bilateral Adversarial Patch Generating Network (BAPGNet), which effectively incorporates template information into the process of generating adversarial patches. Our network generates adversarial images for both the template and the search region, thereby amplifying the discrepancy between them (a minimal sketch of this bilateral patching follows the list). Additionally, we address the size disparity between the template and the search region by introducing the Focus and DeFocus structures.
- To attack the Region Proposal Network (RPN) within the tracker, we develop two loss functions: the Adversarial Object Loss and the Adversarial Regression Loss. These loss functions push the predicted bounding box to drift away from, and shrink relative to, the actual tracking target, thereby misleading the tracking algorithm.
- We introduce a novel metric for evaluating the adversarial ability of patches. By relating a patch's relative size to the attacking performance it achieves, the metric assesses adversarial ability in a size-aware manner.
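To make the bilateral idea concrete, the sketch below pastes one generated patch into the template and another into the search region. This is only a sketch: `paste_patch`, the patch sizes, and the center placement are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def paste_patch(image: torch.Tensor, patch: torch.Tensor, cx: int, cy: int) -> torch.Tensor:
    """Overwrite a rectangular region of `image` (C x H x W) centered at (cx, cy) with `patch`."""
    _, ph, pw = patch.shape
    y0, x0 = cy - ph // 2, cx - pw // 2
    out = image.clone()
    out[:, y0:y0 + ph, x0:x0 + pw] = patch
    return out

# Illustrative crop sizes: 127x127 template and 255x255 search region, as in SiamRPN++.
template = torch.rand(3, 127, 127)
search = torch.rand(3, 255, 255)
patch_z = torch.rand(3, 31, 31)   # assumed template-patch size
patch_x = torch.rand(3, 63, 63)   # assumed search-patch size

adv_template = paste_patch(template, patch_z, cx=63, cy=63)   # patch on the template
adv_search = paste_patch(search, patch_x, cx=127, cy=127)     # patch on the search region
```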
2. Related Work
2.1. Siamese Network-Based SOT
2.2. Adversarial Perturbation-Based Attack
2.3. Adversarial Patch-Based Attack
3. Methodology
3.1. Brief Description of SiamRPN++
3.2. Patch Generating Network Structure
3.3. Loss Function
- Introducing contrasts between the extracted features of the template and the search region before cross-correlation. These contrasts make it harder for the network to match the features of the template and the search region.
- Leading the classification branch of the RPN to give the background a higher score than the tracked object. This makes the network more likely to identify background regions as potential targets.
- Introducing deviation and shrinkage in the bounding box of the tracked object within the search region. This makes it more difficult for the network to locate the object precisely (a hedged sketch of all three losses follows the list).
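The sketch below shows one plausible way to write these three goals as differentiable losses minimized by the patch generator. It is only a sketch: the tensor shapes, the cosine-similarity form of the feature contrast, the mean score difference, and the offset/shrink targets are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def feature_contrast_loss(feat_z: torch.Tensor, feat_x_roi: torch.Tensor) -> torch.Tensor:
    # Goal 1: push the template features and the matching search-region features apart.
    # Minimizing their cosine similarity increases the contrast between the two.
    return F.cosine_similarity(feat_z.flatten(1), feat_x_roi.flatten(1)).mean()

def adversarial_object_loss(cls_scores: torch.Tensor, object_mask: torch.Tensor) -> torch.Tensor:
    # Goal 2: make background anchors outscore anchors covering the tracked object.
    # cls_scores: (N, A) objectness scores; object_mask: 1 on object anchors, 0 elsewhere.
    obj = (cls_scores * object_mask).sum() / object_mask.sum().clamp(min=1)
    bkg = (cls_scores * (1 - object_mask)).sum() / (1 - object_mask).sum().clamp(min=1)
    return obj - bkg  # minimized when background scores exceed object scores

def adversarial_regression_loss(pred_boxes: torch.Tensor, gt_box: torch.Tensor,
                                offset: float = 0.3, shrink: float = 0.5) -> torch.Tensor:
    # Goal 3: pull predicted boxes toward a shifted and shrunken version of the ground truth.
    cx, cy, w, h = gt_box
    target = torch.stack([cx + offset * w, cy + offset * h, shrink * w, shrink * h])
    return F.smooth_l1_loss(pred_boxes, target.expand_as(pred_boxes))
```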
4. Metrics for Evaluating Adversarial Patch
- The larger the patch size, the weaker the PA, and vice versa. This condition implies that as the size of the patch increases, the adversarial nature of the patch decreases. Larger patches may lead to a significant discrepancy between the adversarial and original samples, even completely covering the objects, which reduces the effectiveness of the attack;
- The higher the attacking performance, the stronger the PA, and vice versa. This condition indicates that as the attacking performance improves (i.e., the victim algorithm’s performance drops significantly), the adversarial nature of the patch increases. A stronger attack results in a higher PA value.
- Firstly, we obtain the Success plot (or Precision plot) for the clean, non-adversarial data. This plot represents the performance of the tracker in terms of success rate (or precision) at different overlap thresholds (or distance thresholds);
- We calculate the area under the Success plot (or Precision plot) curve, which is denoted as ASRc (or APRc). This average success rate (or precision rate) represents the tracker’s performance on the clean data;
- Next, we apply an adversarial attack to the input data and obtain a new Success plot (or Precision plot) to measure the tracker’s performance on the perturbed data. We calculate the area under this curve, denoted as ASRa (or APRa);
- Finally, the DRTSR (DRTPR) value can be calculated as DRTSR = (ASRc − ASRa) / ASRc and, analogously, DRTPR = (APRc − APRa) / APRc; a short computational sketch follows the list.
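A minimal sketch of the DRT computation, together with the patch adversarial ability (PA) values reported in the ablation tables. The PA form used here, DRT divided by the patch's relative size (taken as 0.2), is an assumption inferred from the tabulated numbers rather than a formula quoted from the paper.

```python
def drop_rate(clean_auc: float, attacked_auc: float) -> float:
    """DRT: relative drop of the area under the Success (or Precision) plot."""
    return (clean_auc - attacked_auc) / clean_auc

def patch_adversarial_ability(drt: float, relative_patch_size: float) -> float:
    """PA: attack strength normalized by patch size (assumed form: DRT / size)."""
    return drt / relative_patch_size

# SiamRPN++ on UAV123 (values from the tables below): ASRc = 58.8%, ASRa = 18.0%.
drt_sr = drop_rate(0.588, 0.180)
print(f"DRT_SR = {drt_sr:.1%}")                                  # DRT_SR = 69.4%
print(f"PA_sr  = {patch_adversarial_ability(drt_sr, 0.2):.2f}")  # PA_sr  = 3.47
```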
5. Experiment
5.1. Datasets
5.2. Experimental Setup
5.3. Evaluation of the Attacking Performance
5.4. Influence of the Pre-Training Process
- Construct a feature extraction network identical to a single branch of the Siamese network in the tracking network;
- Flatten the output of the feature extraction network's last convolutional layer;
- Add a fully connected layer consisting of 1000 neurons;
- Apply a SoftMax operation to normalize the network output;
- Train the network on the ImageNet1k dataset with the cross-entropy loss function;
- Use the pre-trained network to initialize the parameters of the Siamese network in the tracking network (both branches share the same parameters);
- Freeze the parameters of the Siamese network and train the network for 200 epochs on the tracking dataset;
- Unfreeze the parameters of the Siamese network and fine-tune them for 50 epochs to produce the final network (a code sketch of this pipeline follows the list).
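A minimal PyTorch sketch of the classifier construction and the freeze/unfreeze logic. The stand-in backbone, the 2048-dimensional feature size (ResNet-50-like, as used by SiamRPN++), and the pooling step are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PretrainNet(nn.Module):
    """Single Siamese branch + flatten + 1000-way classifier for ImageNet1k pre-training."""
    def __init__(self, backbone: nn.Module, feat_dim: int = 2048, num_classes: int = 1000):
        super().__init__()
        self.backbone = backbone                    # one branch of the Siamese network
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(feat_dim, num_classes)  # the added 1000-neuron layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.pool(self.backbone(x)).flatten(1)  # flatten the last conv output
        return self.fc(feat)                           # logits; CE loss applies SoftMax

backbone = nn.Sequential(nn.Conv2d(3, 2048, 3, padding=1), nn.ReLU())  # stand-in branch
net = PretrainNet(backbone)
criterion = nn.CrossEntropyLoss()  # combines LogSoftmax + NLL for the SoftMax/CE steps

logits = net(torch.rand(2, 3, 127, 127))
loss = criterion(logits, torch.tensor([3, 7]))  # dummy ImageNet1k class labels

# After pre-training: initialize both Siamese branches from `backbone`, freeze them,
# train the remaining layers for 200 epochs, then unfreeze and fine-tune for 50 epochs.
for p in backbone.parameters():
    p.requires_grad = False   # freeze; set back to True for the fine-tuning stage
```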
5.5. Ablation Experiment
5.6. Comparing with Other Attacking Algorithms
- The proposed network adopts the backbone of the YOLOv5 network, whose proven design gives it strong feature extraction ability and high efficiency.
- The proposed algorithm applies adversarial patches to both the search region and the template, maximizing the discrepancy between them and thereby amplifying the attack's effectiveness.
6. Discussion
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Bai, C.; Gong, Y.; Cao, X. Pedestrian Tracking and Trajectory Analysis for Security Monitoring. In Proceedings of the 2020 IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 12–14 June 2020; IEEE: New York, NY, USA, 2020.
- Emami, A.; Dadgostar, F.; Bigdeli, A.; Lovell, B.C. Role of spatiotemporal oriented energy features for robust visual tracking in video surveillance. In Proceedings of the 2012 IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance, Beijing, China, 18–21 September 2012; IEEE: New York, NY, USA, 2012.
- Gao, M.; Jin, L.; Jiang, Y.; Guo, B. Manifold Siamese network: A novel visual tracking convnet for autonomous vehicles. IEEE Trans. Intell. Transp. Syst. 2019, 21, 1612–1623.
- Robin, C.; Lacroix, S. Multi-robot target detection and tracking: Taxonomy and survey. Auton. Robot. 2016, 40, 729–760.
- Zhang, Z.; Doi, K.; Iwasaki, A.; Xu, G. Unsupervised domain adaptation of high-resolution aerial images via correlation alignment and self training. IEEE Geosci. Remote Sens. Lett. 2020, 18, 746–750.
- Bromley, J.; Guyon, I.; LeCun, Y.; Säckinger, E.; Shah, R. Signature verification using a "siamese" time delay neural network. Adv. Neural Inf. Process. Syst. 1993, 6, 737–744.
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199.
- Song, Y.; Ma, C.; Wu, X.; Gong, L.; Bao, L.; Zuo, W.; Shen, C.; Lau, R.W.; Yang, M.-H. Vital: Visual tracking via adversarial learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018.
- Wang, X.; Li, C.; Luo, B.; Tang, J. Sint++: Robust visual tracking via adversarial positive instance generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018.
- Rasol, J.; Xu, Y.; Zhang, Z.; Zhang, F.; Feng, W.; Dong, L.; Hui, T.; Tao, C. An Adaptive Adversarial Patch-Generating Algorithm for Defending against the Intelligent Low, Slow, and Small Target. Remote Sens. 2023, 15, 1439.
- Wiyatno, R.R.; Xu, A. Physical adversarial textures that fool visual object tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019.
- Li, Z.; Shi, Y.; Gao, J.; Wang, S.; Li, B.; Liang, P.; Hu, W. A simple and strong baseline for universal targeted attacks on siamese visual tracking. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 3880–3894.
- Chen, X.; Fu, C.; Zheng, F.; Zhao, Y.; Li, H.; Luo, P.; Qi, G.-J. A Unified Multi-Scenario Attacking Network for Visual Object Tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Event, 2–9 February 2021.
- Ding, L.; Wang, Y.; Yuan, K.; Jiang, M.; Wang, P.; Huang, H.; Wang, Z.J. Towards universal physical attacks on single object tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Event, 2–9 February 2021.
- Threet, M.; Busho, C.; Harguess, J.; Jutras, M.; Lape, N.; Leary, S.; Manville, K.; Tan, M.; Ward, C. Physical adversarial attacks in simulated environments. In Proceedings of the 2021 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 12–14 October 2021; IEEE: New York, NY, USA, 2021.
- Tao, R.; Gavves, E.; Smeulders, A.W. Siamese instance search for tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; IEEE: New York, NY, USA, 2016.
- Bertinetto, L.; Valmadre, J.; Henriques, J.F.; Vedaldi, A.; Torr, P.H. Fully-convolutional siamese networks for object tracking. In Proceedings of the Computer Vision–ECCV 2016 Workshops, Amsterdam, The Netherlands, 8–10 and 15–16 October 2016; Proceedings, Part II 14; Springer: Berlin/Heidelberg, Germany, 2016.
- Wang, Q.; Teng, Z.; Xing, J.; Gao, J.; Hu, W.; Maybank, S. Learning attentions: Residual attentional siamese network for high performance online visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018.
- Li, B.; Yan, J.; Wu, W.; Zhu, Z.; Hu, X. High performance visual tracking with siamese region proposal network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018.
- Valmadre, J.; Bertinetto, L.; Henriques, J.; Vedaldi, A.; Torr, P.H. End-to-end representation learning for correlation filter based tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
- Li, B.; Wu, W.; Wang, Q.; Zhang, F.; Xing, J.; Yan, J. Siamrpn++: Evolution of siamese visual tracking with very deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019.
- Zhang, Z.; Peng, H. Deeper and wider siamese networks for real-time visual tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019.
- Wang, Q.; Zhang, L.; Bertinetto, L.; Hu, W.; Torr, P.H. Fast online object tracking and segmentation: A unifying approach. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019.
- Zhu, Z.; Wang, Q.; Li, B.; Wu, W.; Yan, J.; Hu, W. Distractor-aware siamese networks for visual object tracking. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
- Cao, Z.; Fu, C.; Ye, J.; Li, B.; Li, Y. SiamAPN++: Siamese attentional aggregation network for real-time UAV tracking. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021.
- Wang, X.; Chen, Z.; Tang, J.; Luo, B.; Wang, Y.; Tian, Y.; Wu, F. Dynamic attention guided multi-trajectory analysis for single object tracking. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 4895–4908.
- Yan, B.; Wang, D.; Lu, H.; Yang, X. Cooling-shrinking attack: Blinding the tracker with imperceptible noises. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020.
- Liang, S.; Wei, X.; Yao, S.; Cao, X. Efficient adversarial attacks for visual object tracking. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXVI 16; Springer: Berlin/Heidelberg, Germany, 2020.
- Chen, X.; Yan, X.; Zheng, F.; Jiang, Y.; Xia, S.-T.; Zhao, Y.; Ji, R. One-shot adversarial attacks on visual tracking with dual attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020.
- Guo, Q.; Xie, X.; Juefei-Xu, F.; Ma, L.; Li, Z.; Xue, W.; Feng, W.; Liu, Y. Spark: Spatial-aware online incremental attack against visual tracking. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXV 16; Springer: Berlin/Heidelberg, Germany, 2020.
- Jia, S.; Song, Y.; Ma, C.; Yang, X. Iou attack: Towards temporally coherent black-box adversarial attack for visual object tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021.
- Yan, X.; Chen, X.; Jiang, Y.; Xia, S.-T.; Zhao, Y.; Zheng, F. Hijacking tracker: A powerful adversarial attack on visual tracking. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; IEEE: New York, NY, USA, 2020.
- Liu, S.; Chen, Z.; Li, W.; Zhu, J.; Wang, J.; Zhang, W.; Gan, Z. Efficient universal shuffle attack for visual object tracking. In Proceedings of the ICASSP 2022—2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 22–27 May 2022; IEEE: New York, NY, USA, 2022.
- Suttapak, W.; Zhang, J.; Zhang, L. Diminishing-feature attack: The adversarial infiltration on visual tracking. Neurocomputing 2022, 509, 21–33.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016.
- Jocher, G.; Stoken, A.; Borovec, J.; Chaurasia, A.; Changyu, L.; Hogan, A.; Hajek, J.; Diaconu, L.; Kwon, Y.; Defretin, Y. ultralytics/yolov5: v5.0—YOLOv5-P6 1280 models, AWS, Supervise.ly and YouTube integrations. Zenodo, 2021. Available online: https://github.com/ultralytics/yolov5 (accessed on 12 June 2022).
- Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019.
- Sharif, M.; Bhagavatula, S.; Bauer, L.; Reiter, M.K. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016.
| Dataset | Victim Algorithm | ASRc | ASRa | DRTSR |
|---|---|---|---|---|
| UAV123 | SiamRPN++ | 58.8% | 18.0% | 69.4% |
| UAV123 | DaSiamRPN | 53.5% | 17.2% | 67.8% |
| UAV123 | SiamAPN++ | 56.4% | 21.8% | 61.3% |
| UAVDT | SiamRPN++ | 52.9% | 18.3% | 65.3% |
| UAVDT | DaSiamRPN | 50.0% | 17.7% | 64.6% |
| UAVDT | SiamAPN++ | 52.2% | 22.8% | 56.3% |
| Dataset | Victim Algorithm | APRc | APRa | DRTPR |
|---|---|---|---|---|
| UAV123 | SiamRPN++ | 73.9% | 30.4% | 58.9% |
| UAV123 | DaSiamRPN | 68.4% | 34.0% | 50.3% |
| UAV123 | SiamAPN++ | 69.3% | 38.2% | 45.0% |
| UAVDT | SiamRPN++ | 68.7% | 34.7% | 49.5% |
| UAVDT | DaSiamRPN | 65.5% | 33.9% | 48.2% |
| UAVDT | SiamAPN++ | 66.7% | 39.5% | 40.8% |
| Dataset | Attacking Model | Victim Algorithm | ASRc | ASRa | DRTSR |
|---|---|---|---|---|---|
| UAV123 | Trained for SiamRPN++ | DaSiamRPN | 53.5% | 28.4% | 47.0% |
| UAV123 | Trained for SiamRPN++ | SiamAPN++ | 56.4% | 30.0% | 46.7% |
| UAV123 | Trained for DaSiamRPN | SiamRPN++ | 58.8% | 25.2% | 57.2% |
| UAV123 | Trained for DaSiamRPN | SiamAPN++ | 56.4% | 28.8% | 49.3% |
| UAV123 | Trained for SiamAPN++ | SiamRPN++ | 58.8% | 32.3% | 45.0% |
| UAV123 | Trained for SiamAPN++ | DaSiamRPN | 53.5% | 27.5% | 48.6% |
| UAVDT | Trained for SiamRPN++ | DaSiamRPN | 50.0% | 29.5% | 41.0% |
| UAVDT | Trained for SiamRPN++ | SiamAPN++ | 52.2% | 30.3% | 42.0% |
| UAVDT | Trained for DaSiamRPN | SiamRPN++ | 52.9% | 27.1% | 48.7% |
| UAVDT | Trained for DaSiamRPN | SiamAPN++ | 52.2% | 30.9% | 40.8% |
| UAVDT | Trained for SiamAPN++ | SiamRPN++ | 52.9% | 28.5% | 46.1% |
| UAVDT | Trained for SiamAPN++ | DaSiamRPN | 50.0% | 28.8% | 42.4% |
| Dataset | Attacking Model | Victim Algorithm | APRc | APRa | DRTPR |
|---|---|---|---|---|---|
| UAV123 | Trained for SiamRPN++ | DaSiamRPN | 68.4% | 44.1% | 35.6% |
| UAV123 | Trained for SiamRPN++ | SiamAPN++ | 69.3% | 47.6% | 31.4% |
| UAV123 | Trained for DaSiamRPN | SiamRPN++ | 73.9% | 44.3% | 40.1% |
| UAV123 | Trained for DaSiamRPN | SiamAPN++ | 69.3% | 51.9% | 25.2% |
| UAV123 | Trained for SiamAPN++ | SiamRPN++ | 73.9% | 47.8% | 35.3% |
| UAV123 | Trained for SiamAPN++ | DaSiamRPN | 68.4% | 46.2% | 32.5% |
| UAVDT | Trained for SiamRPN++ | DaSiamRPN | 65.4% | 46.1% | 29.4% |
| UAVDT | Trained for SiamRPN++ | SiamAPN++ | 66.7% | 51.2% | 23.3% |
| UAVDT | Trained for DaSiamRPN | SiamRPN++ | 68.7% | 46.5% | 32.3% |
| UAVDT | Trained for DaSiamRPN | SiamAPN++ | 66.7% | 53.6% | 19.7% |
| UAVDT | Trained for SiamAPN++ | SiamRPN++ | 68.7% | 48.5% | 29.3% |
| UAVDT | Trained for SiamAPN++ | DaSiamRPN | 65.4% | 49.2% | 24.7% |
| Dataset | Victim Algorithm | ASRc | ASRa | DRTSR |
|---|---|---|---|---|
| UAV123 | Pre-SiamRPN++ | 59.2% | 18.6% | 68.6% |
| UAV123 | Pre-DaSiamRPN | 53.1% | 16.9% | 68.2% |
| UAV123 | Pre-SiamAPN++ | 55.8% | 21.9% | 60.8% |
| UAVDT | Pre-SiamRPN++ | 53.3% | 18.1% | 66.0% |
| UAVDT | Pre-DaSiamRPN | 49.6% | 17.8% | 64.1% |
| UAVDT | Pre-SiamAPN++ | 51.7% | 23.0% | 55.5% |
| Dataset | Victim Algorithm | APRc | APRa | DRTPR |
|---|---|---|---|---|
| UAV123 | Pre-SiamRPN++ | 74.5% | 31.1% | 58.3% |
| UAV123 | Pre-DaSiamRPN | 67.8% | 33.2% | 51.0% |
| UAV123 | Pre-SiamAPN++ | 68.9% | 38.4% | 44.3% |
| UAVDT | Pre-SiamRPN++ | 69.7% | 34.5% | 50.5% |
| UAVDT | Pre-DaSiamRPN | 64.9% | 33.8% | 47.9% |
| UAVDT | Pre-SiamAPN++ | 66.1% | 40.1% | 39.3% |
| Dataset | Attacking Network | ASRc | ASRa | DRTSR | PAsr |
|---|---|---|---|---|---|
| UAV123 | BAPGNet | 58.8% | 18.0% | 69.4% | 3.47 |
| UAV123 | BAPGNet1 | 58.8% | 22.5% | 61.7% | 3.085 |
| UAV123 | BAPGNet2 | 58.8% | 28.1% | 52.2% | 2.61 |
| UAVDT | BAPGNet | 52.9% | 18.3% | 65.3% | 3.265 |
| UAVDT | BAPGNet1 | 52.9% | 23.2% | 56.1% | 2.805 |
| UAVDT | BAPGNet2 | 52.9% | 29.3% | 44.6% | 2.23 |
| Dataset | Attacking Network | APRc | APRa | DRTPR | PApr |
|---|---|---|---|---|---|
| UAV123 | BAPGNet | 73.9% | 30.4% | 58.9% | 2.945 |
| UAV123 | BAPGNet1 | 73.9% | 33.5% | 50.3% | 2.515 |
| UAV123 | BAPGNet2 | 73.9% | 37.8% | 45.0% | 2.25 |
| UAVDT | BAPGNet | 68.7% | 34.7% | 49.5% | 2.475 |
| UAVDT | BAPGNet1 | 68.7% | 36.6% | 48.2% | 2.41 |
| UAVDT | BAPGNet2 | 68.7% | 40.7% | 40.8% | 2.04 |
| Dataset | Attacking Network | ASRc | ASRa | DRTSR | PAsr |
|---|---|---|---|---|---|
| UAV123 | BAPGNet | 58.8% | 18.0% | 69.4% | 3.47 |
| UAV123 | SA | 58.8% | 30.5% | 48.2% | 2.41 |
| UAV123 | UEN-P | 58.8% | 26.3% | 55.3% | 2.765 |
| UAVDT | BAPGNet | 52.9% | 18.3% | 65.3% | 3.265 |
| UAVDT | SA | 52.9% | 28.4% | 46.3% | 2.315 |
| UAVDT | UEN-P | 52.9% | 26.0% | 50.7% | 2.535 |
| Dataset | Attacking Network | APRc | APRa | DRTPR | PApr |
|---|---|---|---|---|---|
| UAV123 | BAPGNet | 73.9% | 30.4% | 58.9% | 2.945 |
| UAV123 | SA | 73.9% | 41.0% | 44.5% | 2.225 |
| UAV123 | UEN-P | 73.9% | 37.4% | 49.4% | 2.47 |
| UAVDT | BAPGNet | 68.7% | 34.7% | 49.5% | 2.475 |
| UAVDT | SA | 68.7% | 42.4% | 38.2% | 1.91 |
| UAVDT | UEN-P | 68.7% | 39.7% | 42.2% | 2.11 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Rasol, J.; Xu, Y.; Zhang, Z.; Tao, C.; Hui, T. Bilateral Adversarial Patch Generating Network for the Object Tracking Algorithm. Remote Sens. 2023, 15, 3670. https://doi.org/10.3390/rs15143670