Space non-cooperative target detection based on improved features of histogram of oriented gradient
Received date: 2015-01-04
Revised date: 2015-03-11
Online published: 2015-03-18
Supported by
National Natural Science Foundation of China (11272256, 61005062)
Traditional non-cooperative target detection methods are mostly based on matching templates, which require prior information about the target to be specified in advance so that a suitable template can be designed. Moreover, a single template can only detect objects with similar shapes and structures, so such methods are difficult to apply directly to non-cooperative targets whose shape is unknown. To reduce the reliance on prior information such as target shape, and inspired by the object proposal technique based on normed gradients, an object detection algorithm using improved histogram of oriented gradient (HOG) features is proposed. A training data set composed of natural images and target images is first built. The improved HOG features are then extracted from the labeled regions to better preserve the structure of local features, and a cascaded support vector machine is trained on the data set to learn the discriminative features of the target automatically. Finally, the trained model is used to detect targets in the testing images. Experimental results show that the proposed method achieves detection rates of 94.5% and 94.2% on testing sets of 4 953 and 100 images, respectively, with an average detection time of about 0.031 s per image, and that it is robust to target rotation and illumination changes to a certain extent.
CHEN Lu, HUANG Panfeng, CAI Jia. Space non-cooperative target detection based on improved features of histogram of oriented gradient[J]. Acta Aeronautica et Astronautica Sinica, 2016, 37(2): 717-726. DOI: 10.7527/S1000-6893.2015.0072
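To make the pipeline summarized above more concrete, the sketch below shows a generic HOG-plus-linear-SVM detector in Python. It is only an illustration under assumed settings (a fixed 64 pixel × 64 pixel window, standard skimage HOG parameters, a single LinearSVC standing in for the cascaded SVM, and a single-scale sliding-window search); all function and variable names are hypothetical, and it does not reproduce the paper's improved HOG features.

# Illustrative sketch only: generic HOG + linear SVM detection in the spirit of the
# pipeline described in the abstract, not the authors' implementation.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

WINDOW = (64, 64)  # assumed canonical window size for feature extraction

def hog_descriptor(patch):
    """Resize a gray-scale patch to the canonical window and compute its HOG vector."""
    patch = resize(patch, WINDOW, anti_aliasing=True)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_detector(pos_patches, neg_patches):
    """Train a linear SVM on lists of labeled target (positive) and background (negative) patches."""
    X = np.array([hog_descriptor(p) for p in pos_patches + neg_patches])
    y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
    clf = LinearSVC(C=0.01, max_iter=10000)
    clf.fit(X, y)
    return clf

def detect(image, clf, step=16, threshold=0.5):
    """Slide the canonical window over a gray-scale image and keep windows the SVM scores highly."""
    boxes = []
    h, w = image.shape[:2]
    for top in range(0, h - WINDOW[0] + 1, step):
        for left in range(0, w - WINDOW[1] + 1, step):
            patch = image[top:top + WINDOW[0], left:left + WINDOW[1]]
            score = clf.decision_function([hog_descriptor(patch)])[0]
            if score > threshold:
                boxes.append((left, top, WINDOW[1], WINDOW[0], score))
    return boxes

A cascaded SVM, as used in the paper, would typically replace the single classifier here with a cheap first stage that filters candidate windows and a more discriminative second stage applied only to the survivors, which helps keep the per-image detection time low.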