Electronics, Electrical Engineering and Control


Fast and accurate target positioning with large viewpoint based on inertial navigation system information

  • ZENG Qinghua ,
  • PAN Pengju ,
  • LIU Jianye ,
  • WANG Yunshu ,
  • LIU Sheng
  • 1. College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China;
    2. Satellite Communication and Navigation Collaborative Innovation Center, Nanjing 211106, China;
    3. AVIC Luoyang Electro-optical Equipment Research Institute, Luoyang 471009, China

Received date: 2017-02-10

  Revised date: 2017-03-12

  Online published: 2017-04-26

Supported by

National Natural Science Foundation of China (61533008, 61104188, 61374115, 61603181); Jiangsu Innovation Program for Graduate Education (KYLX15_0277); the Fundamental Research Funds for the Central Universities (NS2015037)


Cite this article as

ZENG Qinghua, PAN Pengju, LIU Jianye, WANG Yunshu, LIU Sheng. Fast and accurate target positioning with large viewpoint based on inertial navigation system information[J]. Acta Aeronautica et Astronautica Sinica, 2017, 38(8): 321171-321171. DOI: 10.7527/S1000-6893.2017.321171

Abstract

Target positioning technology is widely used in aerial reconnaissance and strike missions carried out by reconnaissance aircraft, unmanned aerial vehicles and other platforms, and both the accuracy and the efficiency of target positioning strongly affect combat effectiveness. To address the low accuracy and slow speed of the Affine Scale-Invariant Feature Transform (ASIFT) algorithm for distant targets observed under large viewpoint differences, a fast and accurate large-viewpoint target positioning method aided by inertial navigation information is proposed. The method first constructs a scale space for the real-time target image sequence and matches it using a combination of Features from Accelerated Segment Test (FAST) detection and Fast Retina Keypoint (FREAK) description, so that the target to be positioned is extracted quickly. The airborne inertial navigation information is then used to solve for the perspective transformation matrix between the real-time image and the reference image, and the real-time image is warped with this matrix to reduce the viewpoint difference between the two images. This avoids the blind exhaustive matching of the ASIFT algorithm, while the FAST-FREAK combination further raises the matching speed for large-viewpoint images. Finally, the target is accurately positioned in the reference image through the homography mapping. Experimental results show that the matching time of the proposed method is one order of magnitude shorter than that of the ASIFT algorithm, and that its positioning accuracy is one order of magnitude better than that of the target-averaging positioning algorithm, effectively improving the efficiency of image-matching based positioning in aerial applications.
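
The pipeline summarized in the abstract can be illustrated with a short Python/OpenCV sketch that chains the three steps: an INS-derived pre-warp of the real-time image, FAST + FREAK matching against the reference image, and RANSAC homography estimation followed by mapping of the target pixel. This is a minimal sketch under simplifying assumptions, not the authors' implementation: it requires opencv-contrib-python (for cv2.xfeatures2d.FREAK_create); the file names, camera intrinsics K, INS attitude change R_rel and target pixel are hypothetical placeholders; and the pre-warp uses the common far-scene approximation H ≈ K·R·K⁻¹ instead of the exact perspective transform derived in the paper.

import cv2
import numpy as np

def ins_prewarp_homography(K, R_rel):
    # For a distant, roughly planar scene, the viewpoint change induced by a
    # pure camera rotation R_rel is approximately the homography K·R_rel·K^-1.
    return K @ R_rel @ np.linalg.inv(K)

def fast_freak_match(img_a, img_b):
    # FAST corners + FREAK binary descriptors, matched by Hamming distance.
    fast = cv2.FastFeatureDetector_create(threshold=25)
    freak = cv2.xfeatures2d.FREAK_create()
    kp_a, des_a = freak.compute(img_a, fast.detect(img_a, None))
    kp_b, des_b = freak.compute(img_b, fast.detect(img_b, None))
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return kp_a, kp_b, sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

# Hypothetical inputs: file names, intrinsics, INS attitude change and the
# target pixel below are placeholders, not values from the paper.
real_img = cv2.imread("real_time.png", cv2.IMREAD_GRAYSCALE)   # measured image
ref_img = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)    # geo-referenced image
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R_rel, _ = cv2.Rodrigues(np.deg2rad(np.array([0.0, -35.0, 0.0])))

# Step 1: pre-warp the real-time image with the INS-derived homography to
# shrink the viewpoint difference before feature matching.
H_ins = ins_prewarp_homography(K, R_rel)
warped = cv2.warpPerspective(real_img, H_ins, (ref_img.shape[1], ref_img.shape[0]))

# Step 2: FAST + FREAK matching between the warped image and the reference image.
kp_w, kp_r, matches = fast_freak_match(warped, ref_img)

# Step 3: robust homography from the matches (RANSAC), then map the target
# pixel through both transforms to locate it in the reference image.
pts_w = np.float32([kp_w[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H_match, _ = cv2.findHomography(pts_w, pts_r, cv2.RANSAC, 3.0)

target_in_real = np.float32([[[512.0, 384.0]]])                # placeholder target pixel
target_in_ref = cv2.perspectiveTransform(
    cv2.perspectiveTransform(target_in_real, H_ins), H_match)
print("Target location in reference image:", target_in_ref.ravel())

The INS pre-warp only needs to be approximately correct: its role is to bring the two views close enough that the fast binary FAST/FREAK matching, rather than ASIFT's exhaustive simulation of viewpoints, can supply enough correct correspondences for the RANSAC homography.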

References

[1] HE S T. Composition and key technologies of air-to-ground precision strike system[J]. Ordnance Industry Automation, 2016, 35(6): 12-15 (in Chinese).
[2] SHAO H. Research on high precision target location technology of UAV[D]. Nanjing: Nanjing University of Aeronautics and Astronautics, 2014: 1-9 (in Chinese).
[3] XU C, HUANG D Q. Target location error analysis of UAV electro-optical detection platform[J]. Chinese Journal of Scientific Instrument, 2013, 34(10): 2265-2270 (in Chinese).
[4] YANG S, CHENG H, LI T, et al. Military application of target localization in UAV reconnaissance images[J]. Infrared Technology, 2016, 38(6): 467-471 (in Chinese).
[5] JOSÉ R G B, HAROLDO F C V, GIANPAOLO C, et al. An image matching system for autonomous UAV navigation based on neural network[C]//2016 14th International Conference on Control, Automation, Robotics and Vision, 2016: 1-6.
[6] JIANG S, CAO D, WU Y, et al. Efficient line-based lens distortion correction for complete distortion with vanishing point constraint[J]. Applied Optics, 2015, 54(14): 4432-4438.
[7] WU W P. Research on local affine invariant feature extraction[D]. Changchun: Changchun Institute of Optics, Fine Mechanics and Physics, 2015: 11-31 (in Chinese).
[8] MATAS J, CHUM O, URBAN M, et al. Robust wide-baseline stereo from maximally stable extremal regions[J]. Image and Vision Computing, 2004, 22(10): 761-767.
[9] YU R P, JIN L H, GAO N, et al. Oblique stereo image registration fusing scale and affine invariant features[J]. Science of Surveying and Mapping, 2016, 41(7): 138-143 (in Chinese).
[10] MOREL J M, YU G. ASIFT: A new framework for fully affine invariant image comparison[J]. SIAM Journal on Imaging Sciences, 2009, 2(2): 438-469.
[11] MA X M, LIU D, ZHANG J, et al. A fast affine-invariant features for image stitching under large viewpoint changes[J]. Neurocomputing, 2015, 151: 1430-1438.
[12] YANG Q. Research on multi-view target template correction and fusion based on inertial navigation information[D]. Changsha: National University of Defense Technology, 2011: 12-18 (in Chinese).
[13] SONG L. Research on key technologies of visual navigation for UAV in flight[D]. Xi'an: Northwestern Polytechnical University, 2015: 17-33 (in Chinese).
[14] MA X, CHENG Y M, HAO S, et al. Dense point feature generation algorithm based on monocular sequence images for depth measurement of unknown zone[J]. Acta Aeronautica et Astronautica Sinica, 2015, 36(2): 596-604 (in Chinese).
[15] SHEN H, LI S X, SHEN Y P, et al. Fast interframe registration method in aerial videos[J]. Acta Aeronautica et Astronautica Sinica, 2013, 34(6): 1405-1413 (in Chinese).
[16] ROSTEN E, DRUMMOND T. Machine learning for high-speed corner detection[C]//European Conference on Computer Vision, 2006: 430-443.
[17] ALAHI A, ORTIZ R, VANDERGHEYNST P. FREAK: Fast retina keypoint[C]//Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012: 510-517.
[18] FISCHLER M A, BOLLES R C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography[J]. Communications of the ACM, 1981, 24(6): 381-395.
[19] TIAN Y G, YANG G, WU W. A strict geometric calibration method for airborne hyperspectral sensors aided by high resolution images[J]. Acta Aeronautica et Astronautica Sinica, 2015, 36(4): 1250-1258 (in Chinese).
[20] WANG Y S, LIU J Y, ZENG Q H, et al. Inertial information aided fast image matching method for large viewpoint images[J]. Journal of Chinese Inertial Technology, 2016, 28(4): 504-510 (in Chinese).
[21] HONG L, TIAN Q L, JI B J. Calibration method of line structured light parameters based on homography matrix[J]. Acta Photonica Sinica, 2015, 44(12): 113-118 (in Chinese).
[22] LIU T T. Method of removing SIFT error matching points based on homography matrix[J]. Journal of Harbin University of Commerce (Natural Science Edition), 2016, 37(1): 95-98, 106 (in Chinese).
