
Indoor positioning technology of multi-rotor flying robot based on visual-inertial fusion

  • Huaijie ZHANG ,
  • Jingya MA ,
  • Haoyuan LIU ,
  • Pin GUO ,
  • Huichao DENG ,
  • Kun XU ,
  • Xilun DING
  • 1. Robotics Institute, Beihang University, Beijing 100191, China
    2. Institute of Spacecraft System Engineering, CAST, Beijing 100094, China

Received date: 2022-01-17

  Revised date: 2022-02-17

  Accepted date: 2022-03-23

  Online published: 2022-04-12

Supported by

National Natural Science Foundation of China(91748201)


Cite this article

ZHANG Huaijie, MA Jingya, LIU Haoyuan, GUO Pin, DENG Huichao, XU Kun, DING Xilun. Indoor positioning technology of multi-rotor flying robot based on visual-inertial fusion[J]. Acta Aeronautica et Astronautica Sinica, 2023, 44(5): 426964. DOI: 10.7527/S1000-6893.2022.26964

Abstract

With the development of artificial intelligence technology, the application scenarios of UAVs are becoming increasingly diverse. UAVs are no longer expected merely to fly; they are given the role of flying robots, which imposes higher requirements on autonomous navigation, positioning in complex environments, and intelligent cooperation. To meet the positioning requirements of indoor scenes, indoor positioning of a multi-rotor flying robot is realized by fusing visual and inertial data. An image enhancement algorithm is added to the visual front end to improve the gray-level contrast of images and reduce mismatched points in optical flow tracking. To address the drift problem in visual-inertial fusion positioning, a feature point extraction and image frame publishing strategy based on image information is proposed to improve positioning accuracy. For the indoor autonomous tracking and landing task, an autonomous landing system based on visual positioning is designed. A flying robot model is built in Gazebo to verify the effectiveness of the landing system in simulation, and the positioning algorithm is compared and evaluated on the EuRoC dataset. A flying robot platform is then built for indoor positioning experiments in a real scene, completing autonomous tracking of and landing on a ground platform. Error analysis using ground-truth positioning data from a motion capture system shows that the proposed positioning technology meets the requirements of autonomous tracking and landing tasks in indoor scenes.

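The error analysis against motion-capture ground truth mentioned in the abstract is commonly reported as absolute trajectory error (ATE) after rigidly aligning the estimated trajectory to the ground-truth one. The sketch below shows that standard computation under this assumption; it is not the paper's actual evaluation code.

```python
import numpy as np

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE) after aligning the estimated
    trajectory to ground truth with a rigid Kabsch/Umeyama-style fit.
    est, gt: (N, 3) arrays of corresponding positions."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    # Optimal rotation from the SVD of the correlation matrix.
    U, _S, Vt = np.linalg.svd(E.T @ G)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

# Synthetic check: a rotated and shifted copy of a random-walk
# trajectory should align back with near-zero error.
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
gt = np.cumsum(np.random.default_rng(1).normal(size=(200, 3)), axis=0)
est = gt @ Rz.T + np.array([0.5, -1.0, 2.0])
print(f"ATE RMSE = {ate_rmse(est, gt):.6f} m")
```

In practice the estimated and ground-truth trajectories must first be time-synchronized and associated point-to-point (motion capture typically runs at a much higher rate than the estimator) before this alignment is applied.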