Electronics, Electrical Engineering and Control


Adaptive Siamese network based UAV target tracking algorithm

  • LIU Fang ,
  • YANG Anzhe ,
  • WU Zhiwei
  • Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China

Received date: 2019-09-02

  Revised date: 2019-09-17

  Online published: 2019-10-17

Supported by

National Natural Science Foundation of China (61171119)


Cite this article

LIU Fang, YANG Anzhe, WU Zhiwei. Adaptive Siamese network based UAV target tracking algorithm[J]. Acta Aeronautica et Astronautica Sinica, 2020, 41(1): 323423. DOI: 10.7527/S1000-6893.2019.23423

Abstract

Unmanned aerial vehicles (UAVs) are widely used in military and civilian applications, and target tracking is one of the key technologies for these applications. To address the deformation and occlusion that targets frequently undergo during UAV tracking, a UAV target tracking algorithm based on an adaptive Siamese network is proposed. First, a five-layer Siamese network is built from two convolutional branches, and the target location is obtained by convolving the template features with the features of the current frame. Second, a Gaussian mixture model is used to model previous prediction results and build a target template library. Third, the most reliable target template is selected from the library and used to update the matching template of the Siamese network, so that the network can adapt to appearance changes of the target. Finally, a regression model is introduced to further refine the target location and reduce the influence of the background on network performance. Simulation results show that the proposed algorithm effectively reduces the impact of deformation and occlusion on tracking performance and achieves high accuracy.
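The abstract outlines a concrete pipeline: a five-layer Siamese feature extractor, cross-correlation of template and search-region features to locate the target, a Gaussian-mixture-model template library for adapting the matching template, and a regression step to refine the box. The following is a minimal sketch of that pipeline in PyTorch and scikit-learn, intended only to illustrate the structure; the layer sizes, the 127/255 input resolutions, the number of GMM components, and the "most reliable template" criterion are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch of the pipeline described in the abstract.
# Architecture details and the template-reliability criterion are assumptions.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture


class SiameseBranch(nn.Module):
    """A small five-layer fully convolutional feature extractor (assumed sizes)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, 128, 3), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, 64, 3),
        )

    def forward(self, x):
        return self.features(x)


def cross_correlate(template_feat, search_feat):
    """Slide the template feature map over the search-region feature map.

    The peak of the returned response map indicates the target position,
    as in SiamFC-style trackers.
    """
    # template_feat: (1, C, h, w) is used as the convolution kernel;
    # search_feat:   (1, C, H, W) is used as the input.
    return F.conv2d(search_feat, template_feat)


def select_reliable_template(past_feats, current_feat, n_components=3):
    """Model past predicted target appearances with a GMM and return the stored
    template that best matches the mixture component explaining the current
    appearance (one plausible reading of "most reliable")."""
    X = np.stack([f.ravel() for f in past_feats])                 # (N, D)
    gmm = GaussianMixture(n_components=min(n_components, len(X)),
                          covariance_type="diag").fit(X)
    comp = gmm.predict(current_feat.ravel()[None, :])[0]          # best component
    members = np.flatnonzero(gmm.predict(X) == comp)
    best = members[np.argmax(gmm.score_samples(X[members]))]      # highest likelihood
    return past_feats[best]


if __name__ == "__main__":
    branch = SiameseBranch().eval()
    template = torch.randn(1, 3, 127, 127)   # exemplar patch from an earlier frame
    search = torch.randn(1, 3, 255, 255)     # search region in the current frame
    with torch.no_grad():
        response = cross_correlate(branch(template), branch(search))
    # The argmax of `response` maps back to a coarse target location; a learned
    # bounding-box regressor would then refine it, per the final step above.
    print(response.shape)
```

In this sketch, both image patches pass through the same branch (shared weights), and the template feature map acts as the correlation kernel over the search features, which is the standard fully convolutional Siamese formulation; the GMM-based selection stands in for the paper's adaptive template update.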
