Electronics and Electrical Engineering and Control

A tracking algorithm of improved spatio-temporal context with Kalman filter

  • ZHAO Zhou,
  • HUANG Panfeng,
  • CHEN Lu
  • 1. Research Center for Intelligent Robotics, School of Astronautics, Northwestern Polytechnical University, Xi'an 710072, China;
    2. National Key Laboratory of Aerospace Flight Dynamics, Northwestern Polytechnical University, Xi'an 710072, China

Received date: 2016-04-11

  Revised date: 2015-06-27

  Online published: 2016-06-28

Supported by

National Natural Science Foundation of China (11272256, 61005062, 60805034)

Abstract

When a fast-moving target suffers severe occlusion, the tracking accuracy of the spatio-temporal context (STC) algorithm decreases. This paper proposes a tracking algorithm that combines an improved spatio-temporal context model with a Kalman filter. The rectangular region of the target is manually marked in the first frame, and the improved spatio-temporal context algorithm is then applied to track the target. During tracking, the Euclidean distance between the image intensities of the target region in two consecutive frames determines the state of the target. A Kalman filter is applied to reduce the influence of noise, to predict and estimate the possible position of the target under severe occlusion, and to obtain a better rectangular region for the tracked object. Experimental results show that the improved spatio-temporal context algorithm with Kalman filter can track high-speed, highly maneuvering targets under different illumination levels, and is robust to scale variation and severe occlusion. The average processing time is 34.07 ms per frame. The average geometric center error is 5.43 pixels per frame, 70.2% less than that of the spatio-temporal context algorithm, and the average contour area error is 13.08% per frame, 52.7% less than that of the spatio-temporal context algorithm.
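The abstract outlines a per-frame loop: track with the STC model, test for occlusion via the Euclidean distance of image intensities between consecutive frames, and fall back on the Kalman prediction when the target is severely occluded. The following is a minimal sketch of that loop, not the authors' implementation: `stc_track` is a hypothetical placeholder for the improved STC tracker step, and the constant-velocity state model, patch size, and distance threshold are illustrative assumptions.

```python
# Sketch of the occlusion-handling loop described in the abstract.
# Assumptions (not from the paper): stc_track() stands in for the improved
# spatio-temporal context tracker; the constant-velocity Kalman model and
# the distance threshold are illustrative choices.
import numpy as np
import cv2


def make_kalman():
    # State: [cx, cy, vx, vy]; measurement: [cx, cy] (constant-velocity model).
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
    return kf


def intensity_distance(prev_patch, cur_patch):
    # Euclidean distance between the intensities of the target region in
    # two consecutive frames, used here as the occlusion indicator.
    a = cv2.resize(prev_patch, (32, 32)).astype(np.float32).ravel()
    b = cv2.resize(cur_patch, (32, 32)).astype(np.float32).ravel()
    return float(np.linalg.norm(a - b))


def track(frames, init_box, stc_track, dist_thresh=2500.0):
    """frames: iterable of grayscale images; init_box: (x, y, w, h) marked
    manually in the first frame; stc_track: placeholder STC tracker step."""
    kf = make_kalman()
    x, y, w, h = init_box
    kf.statePost = np.array([[x + w / 2], [y + h / 2], [0], [0]], np.float32)
    prev_patch, boxes = None, []
    for frame in frames:
        pred = kf.predict()                      # predicted target center
        x, y, w, h = stc_track(frame, (x, y, w, h))
        patch = frame[y:y + h, x:x + w]
        occluded = (prev_patch is not None and
                    intensity_distance(prev_patch, patch) > dist_thresh)
        if occluded:
            # Severe occlusion: trust the Kalman prediction instead of STC.
            x = max(0, int(pred[0, 0] - w / 2))
            y = max(0, int(pred[1, 0] - h / 2))
        else:
            # Normal case: correct the filter with the STC measurement.
            kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
        prev_patch = frame[y:y + h, x:x + w]
        boxes.append((x, y, w, h))
    return boxes
```

The key design point mirrored from the abstract is that the Kalman filter runs on every frame to smooth noise, but its prediction replaces the STC output only when the intensity distance signals severe occlusion.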

Cite this article

ZHAO Zhou, HUANG Panfeng, CHEN Lu. A tracking algorithm of improved spatio-temporal context with Kalman filter[J]. ACTA AERONAUTICA ET ASTRONAUTICA SINICA, 2017, 38(2): 320306-320316. DOI: 10.7527/S1000-6893.2016.0202

