Electronics and Electrical Engineering and Control

Adaptive template update-based Transformer algorithm for UAV target tracking

  • Fang LIU,
  • Chenyang LU,
  • Yan LU,
  • Xin WANG
  • 1. School of Information Science and Technology, Beijing University of Technology, Beijing 100124, China
  • 2. Fengtai Power Supply Bureau of Beijing Power Supply Bureau, Beijing 100161, China

Received date: 2024-12-19

Revised date: 2025-02-13

Accepted date: 2025-04-11

Online published: 2025-04-25

Supported by

National Natural Science Foundation of China (61171119)

Abstract

Unmanned Aerial Vehicles (UAVs) have been extensively deployed in both military and civilian applications, where target tracking plays a critical role. To address challenges such as target deformation, occlusion, scale variation, and complex environmental conditions during UAV target tracking, an adaptive template update-based Transformer algorithm for UAV target tracking is proposed. Specifically, a Transformer backbone network is constructed using an improved asymmetric attention mechanism to effectively extract image features and enhance the representation of target-related information. Furthermore, an adaptive template updating strategy based on an appearance variation coefficient is introduced: by dynamically computing this coefficient, the template is updated adaptively to improve the network's ability to cope with appearance changes of the target. Finally, the target position is determined by locating the maximum confidence score in the response map of the search region. Experimental results demonstrate that the proposed algorithm significantly improves the accuracy of UAV target tracking and exhibits strong robustness.
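The template update and localization steps described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not specify how the appearance variation coefficient is computed, so here it is assumed to be one minus the cosine similarity between template and current target features, and the names `appearance_variation_coefficient`, `update_template`, `locate_target`, and the `threshold` and `momentum` parameters are all hypothetical.

```python
import numpy as np

def appearance_variation_coefficient(template_feat, current_feat):
    """Hypothetical coefficient: 1 - cosine similarity between the stored
    template features and the current target features. Values near 0 mean
    the appearance is unchanged; values near 1 mean a strong change."""
    t, c = template_feat.ravel(), current_feat.ravel()
    cos = float(np.dot(t, c) / (np.linalg.norm(t) * np.linalg.norm(c) + 1e-12))
    return 1.0 - cos

def update_template(template_feat, current_feat, threshold=0.3, momentum=0.7):
    """Adaptive update: refresh the template only when the appearance
    variation coefficient exceeds a threshold, blending old and new
    features with a momentum term to avoid drifting onto distractors."""
    coeff = appearance_variation_coefficient(template_feat, current_feat)
    if coeff > threshold:
        template_feat = momentum * template_feat + (1.0 - momentum) * current_feat
    return template_feat, coeff

def locate_target(response_map):
    """Take the target position as the argmax of the response map
    computed over the search region."""
    idx = np.unravel_index(np.argmax(response_map), response_map.shape)
    return idx, float(response_map[idx])
```

The threshold keeps the template fixed under small, noisy appearance fluctuations, while the momentum blend retains historical appearance information when an update does fire.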

Cite this article

Fang LIU, Chenyang LU, Yan LU, Xin WANG. Adaptive template update-based Transformer algorithm for UAV target tracking[J]. Acta Aeronautica et Astronautica Sinica, 2025, 46(16): 331687. DOI: 10.7527/S1000-6893.2025.31687
