Electronics and Electrical Engineering and Control

UAV target tracking algorithm based on adaptive fusion network

  • LIU Fang,
  • SUN Yanan
  • Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China

Received date: 2021-03-18

Revised date: 2021-05-20

Online published: 2021-05-20

Supported by

National Natural Science Foundation of China (61171119)

Abstract

To address the small size of targets and their susceptibility to interference from complex background information in UAV video tracking, a UAV target tracking algorithm based on an adaptive fusion network is proposed. First, a deep network model is constructed from a receptive field block and a residual network, which extracts target features effectively and enlarges their effective receptive field. Second, a multi-scale adaptive fusion network is proposed to adaptively fuse the semantic features of the deep layers with the detail features of the shallow layers, strengthening the representational capability of the features. Finally, the fused target features are fed into a correlation filtering model, and the location with the maximum confidence score in the response map is taken as the tracking result. Simulation results show that the algorithm achieves a high tracking success rate and precision, and can effectively improve the performance of UAV target tracking algorithms.
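As a rough illustration of the fusion step described in the abstract, the sketch below combines a shallow detail feature map with a deep semantic feature map using learned, input-dependent weights. It assumes PyTorch; the module name AdaptiveFusion, its layer sizes, and the softmax-weighted sum are illustrative assumptions rather than the paper's actual implementation.

```python
# Minimal sketch (assumed PyTorch) of multi-scale adaptive feature fusion:
# deep semantic features are upsampled to the shallow map's resolution and
# combined with weights predicted from the features themselves.
# All names and dimensions here are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFusion(nn.Module):
    def __init__(self, shallow_ch, deep_ch, out_ch):
        super().__init__()
        # Project both branches to a common channel dimension.
        self.proj_shallow = nn.Conv2d(shallow_ch, out_ch, kernel_size=1)
        self.proj_deep = nn.Conv2d(deep_ch, out_ch, kernel_size=1)
        # Predict per-branch fusion weights from globally pooled descriptors.
        self.weight_fc = nn.Sequential(
            nn.Linear(2 * out_ch, out_ch // 4),
            nn.ReLU(inplace=True),
            nn.Linear(out_ch // 4, 2),
        )

    def forward(self, shallow_feat, deep_feat):
        # Upsample the low-resolution deep map to the shallow map's size.
        deep_up = F.interpolate(deep_feat, size=shallow_feat.shape[-2:],
                                mode='bilinear', align_corners=False)
        s = self.proj_shallow(shallow_feat)
        d = self.proj_deep(deep_up)
        # Global average descriptors of both branches drive the adaptive weights.
        pooled = torch.cat([s.mean(dim=(2, 3)), d.mean(dim=(2, 3))], dim=1)
        w = torch.softmax(self.weight_fc(pooled), dim=1)  # shape (B, 2)
        w_s = w[:, 0].view(-1, 1, 1, 1)
        w_d = w[:, 1].view(-1, 1, 1, 1)
        # Weighted sum balances shallow detail against deep semantics.
        return w_s * s + w_d * d
```

In a tracker of the kind described, the fused map would then be passed to a correlation-filter head, and the peak of the resulting response map would give the estimated target location.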

Cite this article

LIU Fang, SUN Yanan. UAV target tracking algorithm based on adaptive fusion network[J]. ACTA AERONAUTICA ET ASTRONAUTICA SINICA, 2022, 43(7): 325522. DOI: 10.7527/S1000-6893.2021.25522
