Stereo visual-inertial SLAM algorithm based on merge of point and line features

  • ZHAO Liangyu,
  • JIN Rui,
  • ZHU Yeqing,
  • GAO Fengjie
  • 1. School of Aerospace Engineering, Beijing Institute of Technology, Beijing 100081, China
  • 2. Hiwing Aviation General Equipment Co., Ltd., Beijing 100070, China

Received date: 2020-12-16

Revised date: 2020-12-31

Online published: 2021-03-01

Supported by

National Key R&D Program of China (2017YFC0806700); National Natural Science Foundation of China (12072027, 11532002); Open Research Project of the Beijing Key Laboratory of High Dynamic Navigation Technology (HDN2021101)

Abstract

In weakly textured indoor environments, SLAM algorithms based solely on point features have difficulty tracking enough valid point features, which degrades accuracy and robustness and can even cause the system to fail completely. To address this problem, a stereo visual SLAM algorithm based on point and line features and an Inertial Measurement Unit (IMU) is proposed. The complementarity of point and line features is exploited to improve data association accuracy, while IMU data is incorporated to provide prior and scale information for the visual localization algorithm. A more accurate visual pose is estimated by minimizing a multi-residual objective function, and the point-line feature map, dense map, and navigation map of the environment are then constructed. Traditional line feature extraction algorithms tend to detect large numbers of short, similar line segments and to over-segment lines in complex scenes. To overcome these drawbacks, the strategies of length suppression, near-line merging, and short-line chaining are introduced, yielding an improved FLD algorithm that reduces the mismatch rate of line features and runs more than twice as fast as the LSD algorithm. Comparative results on multiple public datasets and real-world weakly textured scenes show that the proposed algorithm obtains richer environment maps with high positioning accuracy and good robustness.
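The abstract does not spell out the objective being minimized. As a sketch only, a tightly coupled point-line-inertial back end of the kind described (cf. VINS-Mono [7] and PL-VIO [13]) typically estimates the sliding-window states $\mathcal{X}$ by minimizing a sum of marginalization prior, IMU preintegration, point reprojection, and line reprojection residuals:

$$
\min_{\mathcal{X}} \Big\{ \big\| \mathbf{r}_{\mathrm{prior}}(\mathcal{X}) \big\|^{2}
+ \sum_{k \in \mathcal{B}} \big\| \mathbf{r}_{\mathcal{B}}(\hat{\mathbf{z}}_{b_k b_{k+1}}, \mathcal{X}) \big\|^{2}_{\Sigma_{\mathcal{B}}}
+ \sum_{(i,j) \in \mathcal{P}} \rho\big( \big\| \mathbf{r}_{\mathcal{P}}(\hat{\mathbf{z}}^{c_j}_{i}, \mathcal{X}) \big\|^{2}_{\Sigma_{\mathcal{P}}} \big)
+ \sum_{(l,j) \in \mathcal{L}} \rho\big( \big\| \mathbf{r}_{\mathcal{L}}(\hat{\mathbf{z}}^{c_j}_{l}, \mathcal{X}) \big\|^{2}_{\Sigma_{\mathcal{L}}} \big) \Big\}
$$

where $\mathbf{r}_{\mathcal{B}}$ is the IMU preintegration residual between consecutive keyframes $b_k$ and $b_{k+1}$, $\mathbf{r}_{\mathcal{P}}$ and $\mathbf{r}_{\mathcal{L}}$ are the point and line reprojection residuals (the line residual is commonly the distance from the endpoints of the matched 2D segment to the projected 3D line), $\Sigma$ denotes the corresponding measurement covariances, and $\rho$ is a robust kernel such as Huber. This is the generic formulation, not necessarily the paper's exact one.

The three line-segment refinement strategies named in the abstract (length suppression, near-line merging, and short-line chaining) can likewise be illustrated with a minimal Python sketch. The segment representation, all thresholds, and the single endpoint-gap fusion test are assumptions made for this example, not the authors' implementation:

import math

MIN_LEN = 30.0                  # length-suppression threshold in pixels (assumed)
FUSE_ANGLE = math.radians(3.0)  # max orientation difference for fusing (assumed)
FUSE_GAP = 5.0                  # max endpoint gap for merging/chaining (assumed)

def seg_length(seg):
    (x1, y1), (x2, y2) = seg
    return math.hypot(x2 - x1, y2 - y1)

def seg_angle(seg):
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1)

def angle_diff(a, b):
    d = abs(a - b) % math.pi    # undirected lines: orientations equal mod pi
    return min(d, math.pi - d)

def endpoint_gap(s, t):
    # smallest distance between any endpoint of s and any endpoint of t
    return min(math.dist(p, q) for p in s for q in t)

def fuse(s, t):
    # replace two fusable segments by their farthest-apart endpoint pair
    pts = [s[0], s[1], t[0], t[1]]
    return max(((p, q) for p in pts for q in pts),
               key=lambda pq: math.dist(pq[0], pq[1]))

def refine_segments(segments):
    segs = [tuple(map(tuple, s)) for s in segments]
    changed = True
    while changed:              # repeat the greedy fusion pass until stable
        changed = False
        out = []
        while segs:
            s = segs.pop()
            for i, t in enumerate(segs):
                # one endpoint-gap test stands in for both near-line merging
                # (nearly overlapping parallels) and short-line chaining
                # (collinear fragments separated by a small break)
                if (angle_diff(seg_angle(s), seg_angle(t)) < FUSE_ANGLE
                        and endpoint_gap(s, t) < FUSE_GAP):
                    s = fuse(s, t)
                    del segs[i]
                    changed = True
                    break
            out.append(s)
        segs = out
    # length suppression: drop whatever is still shorter than the threshold
    return [s for s in segs if seg_length(s) >= MIN_LEN]

# Example: two nearly collinear fragments are chained into one long segment,
# and the isolated short segment is suppressed.
segs = [((0, 0), (20, 0)), ((23, 0.5), (60, 0)), ((100, 100), (105, 104))]
print(refine_segments(segs))    # -> one segment spanning (0, 0) to (60, 0)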

Cite this article

ZHAO Liangyu, JIN Rui, ZHU Yeqing, GAO Fengjie. Stereo visual-inertial SLAM algorithm based on merge of point and line features[J]. ACTA AERONAUTICA ET ASTRONAUTICA SINICA, 2022, 43(3): 325117. DOI: 10.7527/S1000-6893.2021.25117

References

[1] SHANG T X, WANG J C, DONG L F, et al. 3D LiDAR SLAM technology in lunar environment[J]. Acta Aeronautica et Astronautica Sinica, 2021, 42(1): 524166 (in Chinese).
[2] ZHAO L Y, ZHU Y Q, JIN R. Review of monocular V-SLAM for multi-rotor unmanned aerial vehicles[J]. Aero Weaponry, 2020, 27(2): 1-14 (in Chinese).
[3] GUAN X Z, CAI C X, ZHAI W H, et al. Indoor integrated navigation system for unmanned aerial vehicles based on neural network predictive compensation[J]. Acta Aeronautica et Astronautica Sinica, 2020, 41(S1): 723790 (in Chinese).
[4] CAO J J, FANG J C, SHENG W, et al. Study and application of low-cost multi-sensor integrated navigation for small UAV autonomous flight[J]. Acta Aeronautica et Astronautica Sinica, 2009, 30(10): 1923-1929 (in Chinese).
[5] XIE H L, CHEN W D, FAN Y X, et al. Visual-inertial SLAM in featureless environments on lunar surface[J]. Acta Aeronautica et Astronautica Sinica, 2021, 42(1): 524169 (in Chinese).
[6] LEUTENEGGER S, LYNEN S, BOSSE M, et al. Keyframe-based visual-inertial odometry using nonlinear optimization[J]. The International Journal of Robotics Research, 2015, 34(3): 314-334.
[7] QIN T, LI P L, SHEN S J. VINS-Mono: A robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
[8] KLEIN G, MURRAY D. Parallel tracking and mapping for small AR workspaces[C]//2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. Piscataway: IEEE Press, 2007: 225-234.
[9] MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
[10] GOMEZ-OJEDA R, BRIALES J, GONZALEZ-JIMENEZ J. PL-SVO: Semi-direct monocular visual odometry by combining points and line segments[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Piscataway: IEEE Press, 2016: 4211-4216.
[11] GOMEZ-OJEDA R, MORENO F A, ZUÑIGA-NOËL D, et al. PL-SLAM: A stereo SLAM system through the combination of points and line segments[J]. IEEE Transactions on Robotics, 2019, 35(3): 734-746.
[12] PUMAROLA A, VAKHITOV A, AGUDO A, et al. PL-SLAM: Real-time monocular visual SLAM with points and lines[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). Piscataway: IEEE Press, 2017: 4503-4508.
[13] HE Y J, ZHAO J, GUO Y, et al. PL-VIO: Tightly-coupled monocular visual-inertial odometry using point and line features[J]. Sensors, 2018, 18(4): 1159.
[14] ZOU D P, WU Y X, PEI L, et al. StructVIO: Visual-inertial odometry with structural regularity of man-made environments[J]. IEEE Transactions on Robotics, 2019, 35(4): 999-1013.
[15] GIOI R G V, JAKUBOWICZ J, MOREL J M, et al. LSD: A fast line segment detector with a false detection control[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(4): 722-732.
[16] ZHANG L, KOCH R. An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency[J]. Journal of Visual Communication and Image Representation, 2013, 24(7): 794-805.
[17] FORSTER C, CARLONE L, DELLAERT F, et al. On-manifold preintegration for real-time visual-inertial odometry[J]. IEEE Transactions on Robotics, 2017, 33(1): 1-21.
[18] HORNUNG A, WURM K M, BENNEWITZ M, et al. OctoMap: An efficient probabilistic 3D mapping framework based on octrees[J]. Autonomous Robots, 2013, 34(3): 189-206.
[19] CURLESS B, LEVOY M. A volumetric method for building complex models from range images[C]//Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques. New York: ACM Press, 1996: 303-312.
[20] AKINLAR C, TOPAL C. EDLines: A real-time line segment detector with a false detection control[J]. Pattern Recognition Letters, 2011, 32(13): 1633-1642.
[21] LEE J H, LEE S, ZHANG G X, et al. Outdoor place recognition in urban environments using straight lines[C]//2014 IEEE International Conference on Robotics and Automation (ICRA). Piscataway: IEEE Press, 2014: 5550-5557.
[22] BARTOLI A, STURM P. The 3D line motion matrix and alignment of line reconstructions[C]//Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2001.
[23] BARTOLI A, STURM P. Structure-from-motion using lines: Representation, triangulation, and bundle adjustment[J]. Computer Vision and Image Understanding, 2005, 100(3): 416-441.
[24] WANG D, HUANG L, LI Y. A monocular visual SLAM algorithm based on point-line features[J]. Robot, 2019, 41(3): 392-403 (in Chinese).
[25] NEWCOMBE R A, IZADI S, HILLIGES O, et al. KinectFusion: Real-time dense surface mapping and tracking[C]//2011 10th IEEE International Symposium on Mixed and Augmented Reality. Piscataway: IEEE Press, 2011: 127-136.
[26] OLEYNIKOVA H, TAYLOR Z, FEHR M, et al. Voxblox: Incremental 3D Euclidean signed distance fields for on-board MAV planning[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Piscataway: IEEE Press, 2017: 1366-1373.
[27] BURRI M, NIKOLIC J, GOHL P, et al. The EuRoC micro aerial vehicle datasets[J]. The International Journal of Robotics Research, 2016, 35(10): 1157-1163.
[28] QIN T, PAN J, CAO S, et al. A general optimization-based framework for local odometry estimation with multiple sensors[J]. arXiv preprint arXiv:1901.03638, 2019.