Electronics and Control

Dense point feature generation algorithm based on monocular sequence images for depth measurement of unknown zone

  • MA Xu ,
  • CHENG Yongmei ,
  • HAO Shuai ,
  • CHEN Kezhe ,
  • WANG Tao
  • 1. College of Automation, Northwestern Polytechnical University, Xi'an 710072, China;
    2. School of Electrical and Control Engineering, Xi'an University of Science and Technology, Xi'an 710054, China;
    3. Key Laboratory of Xi'an Flight Automatic Control Research Institute, Aviation Industry Corporation of China, Xi'an 710065, China

MA Xu: female, Ph.D. candidate. Main research interests: visual navigation, pattern recognition, and image processing. E-mail: maxucat@gmail.com. HAO Shuai: male, lecturer, Ph.D. Main research interests: visual navigation, pattern recognition and image processing, and power quality analysis. Tel: 029-88778499. E-mail: hsh000@163.com

Received date: 2014-05-07

  Revised date: 2014-11-02

  Online published: 2014-11-06

Supported by

National Natural Science Foundation of China (60702066, 61074155); Xi'an Science and Technology Project (CXY1350(2))


Cite this article

MA Xu, CHENG Yongmei, HAO Shuai, CHEN Kezhe, WANG Tao. Dense point feature generation algorithm based on monocular sequence images for depth measurement of unknown zone[J]. Acta Aeronautica et Astronautica Sinica, 2015, 36(2): 596-604. DOI: 10.7527/S1000-6893.2014.0308

Abstract

Measuring the flatness of an unknown landing zone is essential for safe UAV landing over complex terrain. Firstly, a depth calculation equation based on monocular sequence images is derived from the pinhole imaging principle. Secondly, a dense point feature generation algorithm based on Delaunay triangulation is proposed to address two problems: sparse matching yields large errors in the reconstructed depth information, while dense matching suffers a high false-match rate in smooth regions. Then, sub-pixel Harris corners and scale-invariant feature transform (SIFT) feature points are extracted and matched separately in two frames selected from the image sequence. The two types of feature points are then fused under a Euclidean-distance constraint between them, yielding quasi-dense feature points. Finally, the quasi-dense feature points are Delaunay-triangulated, and a dense feature point generation strategy is formulated according to the variance of the pixel deviations of the three vertices of each triangle; the depth of every point in the unknown zone is then computed with the proposed depth calculation equation. A simulation demonstration system is built with Vega Prime (VP). Experimental results show that when the airborne camera is 400 m above the ground, the relative depth measurement error for two objects with heights of 90 m and 55 m is below 0.89%, which verifies that the proposed algorithm has high accuracy.
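The depth calculation equation itself is not reproduced on this page. For orientation only, the following is a minimal sketch of the standard pinhole/motion-parallax relation that such a derivation typically rests on; the baseline b between the two selected frames (assumed here to be a pure sideways camera translation known from the UAV's navigation data) and the focal length f in pixels are illustrative symbols, and the equation actually derived in the paper may take a different form:

    Z = \frac{f\,b}{x_1 - x_2}

where x_1 and x_2 are the image coordinates of the same scene point in the two frames (their difference is the disparity) and Z is the depth of that point relative to the camera.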
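The fusion and densification steps can likewise be illustrated with a short Python/OpenCV sketch (requires opencv-python with SIFT support, numpy, and scipy). The fusion rule used here (keeping a matched sub-pixel Harris corner only when it lies farther than a few pixels, in the Euclidean sense, from every matched SIFT point), the use of SIFT descriptors to match the Harris corners, the thresholds, and the centroid-interpolation rule for low-variance triangles are illustrative assumptions, not the paper's exact procedure.

# Hypothetical sketch of quasi-dense feature fusion and Delaunay-based
# densification; parameters and rules are assumptions, not the paper's method.
import cv2
import numpy as np
from scipy.spatial import Delaunay

def subpixel_harris(gray, max_corners=500):
    # Harris corner detection followed by sub-pixel refinement
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=5,
                                  useHarrisDetector=True, k=0.04)
    term = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    return cv2.cornerSubPix(gray, pts, (5, 5), (-1, -1), term).reshape(-1, 2)

def match_two_frames(gray1, gray2, kps1, kps2):
    # Describe the given keypoints with SIFT and cross-check match them
    sift = cv2.SIFT_create()
    kps1, d1 = sift.compute(gray1, kps1)
    kps2, d2 = sift.compute(gray2, kps2)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    p1 = np.float32([kps1[m.queryIdx].pt for m in matches])
    p2 = np.float32([kps2[m.trainIdx].pt for m in matches])
    return p1, p2

def quasi_dense_points(gray1, gray2, fuse_dist=3.0):
    # SIFT keypoint matches between the two frames
    sift = cv2.SIFT_create()
    s1, s2 = match_two_frames(gray1, gray2,
                              sift.detect(gray1, None), sift.detect(gray2, None))
    # Sub-pixel Harris corner matches (wrapped as KeyPoints so SIFT can describe them)
    h1 = [cv2.KeyPoint(float(x), float(y), 7) for x, y in subpixel_harris(gray1)]
    h2 = [cv2.KeyPoint(float(x), float(y), 7) for x, y in subpixel_harris(gray2)]
    hp1, hp2 = match_two_frames(gray1, gray2, h1, h2)
    # Euclidean-distance constraint: drop Harris matches that coincide with SIFT matches
    keep = np.array([np.min(np.linalg.norm(s1 - p, axis=1)) > fuse_dist for p in hp1],
                    dtype=bool)
    return np.vstack([s1, hp1[keep]]), np.vstack([s2, hp2[keep]])

def densify(p1, p2, var_thresh=2.0):
    # Delaunay-triangulate the quasi-dense points in frame 1; the per-vertex
    # pixel deviation is the displacement of each matched point between frames
    tri = Delaunay(p1)
    disp = p2 - p1
    new1, new2 = [], []
    for simplex in tri.simplices:
        d = disp[simplex]  # deviations of the 3 vertices
        if np.var(np.linalg.norm(d, axis=1)) < var_thresh:
            # low-variance (smooth) triangle: add its centroid as an extra
            # dense point, with a linearly interpolated deviation into frame 2
            c = p1[simplex].mean(axis=0)
            new1.append(c)
            new2.append(c + d.mean(axis=0))
    if new1:
        p1, p2 = np.vstack([p1, new1]), np.vstack([p2, new2])
    return p1, p2

In this sketch the variance test plays the role of the per-triangle generation strategy described in the abstract: triangles whose three vertices move consistently between the two frames are assumed to cover a locally smooth surface and are filled in, while inconsistent triangles are left to the original matches alone.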

