ACTA AERONAUTICA ET ASTRONAUTICA SINICA
Indoor positioning technology of multi-rotor flying robot based on visual-inertial fusion
Received date: 2022-01-17
Revised date: 2022-02-17
Accepted date: 2022-03-23
Online published: 2022-04-12
Supported by: National Natural Science Foundation of China (91748201)
With the development of artificial intelligence technology, UAV application scenarios are becoming increasingly diverse. UAVs are no longer expected merely to fly: they are taking on the role of flying robots, which imposes higher requirements on autonomous navigation, positioning in complex environments, and intelligent cooperation. To meet the positioning requirements of indoor scenes, this paper realizes indoor positioning of a multi-rotor flying robot by fusing visual and inertial data. An image enhancement algorithm is added to the visual front end to improve the gray-level contrast of the image. To address the drift problem in visual-inertial fusion positioning, a strategy for feature point extraction and image frame release based on image information is proposed to improve positioning accuracy. For the indoor autonomous tracking and landing task, an autonomous landing system based on visual positioning is designed, and a flying robot model is built in Gazebo to verify its effectiveness. The positioning algorithms are compared and evaluated on the EuRoC dataset. A flying robot platform is built in a real scene for indoor positioning experiments, and the task of autonomously tracking and landing on a ground platform is completed. Error analysis is carried out using the ground-truth positions provided by a motion capture system. The results show that the proposed positioning technology meets the requirements of autonomous tracking and landing tasks in indoor scenes.
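As an illustration of the visual front end described above, the sketch below (not the authors' implementation) chains contrast-limited adaptive histogram equalization (CLAHE) for gray-contrast enhancement with Shi-Tomasi corner extraction [19] and pyramidal Lucas-Kanade optical flow tracking [22], using standard OpenCV calls. All parameter values are illustrative assumptions.

    import cv2

    # Hypothetical front-end sketch: enhance gray contrast, then extract and
    # track sparse feature points between consecutive grayscale frames.
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))

    def track_features(prev_gray, curr_gray, prev_pts=None):
        """Return matched point pairs between two consecutive frames."""
        prev_eq = clahe.apply(prev_gray)   # contrast enhancement (assumed CLAHE)
        curr_eq = clahe.apply(curr_gray)
        if prev_pts is None or len(prev_pts) < 50:   # replenish when sparse
            prev_pts = cv2.goodFeaturesToTrack(      # Shi-Tomasi corners [19]
                prev_eq, maxCorners=150, qualityLevel=0.01, minDistance=30)
        curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(  # pyramidal LK [22]
            prev_eq, curr_eq, prev_pts, None, winSize=(21, 21), maxLevel=3)
        ok = status.ravel() == 1                     # keep tracked points only
        return prev_pts[ok], curr_pts[ok]

Likewise, the error analysis against the motion-capture ground truth can be summarized by a root-mean-square absolute trajectory error; a minimal sketch, assuming the estimated and ground-truth positions are already time-aligned and expressed in a common frame:

    import numpy as np

    def ate_rmse(p_est, p_gt):
        """RMS position error over (N, 3) estimated/ground-truth positions."""
        err = p_est - p_gt                # per-sample position residuals
        return float(np.sqrt((err ** 2).sum(axis=1).mean()))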
Huaijie ZHANG, Jingya MA, Haoyuan LIU, Pin GUO, Huichao DENG, Kun XU, Xilun DING. Indoor positioning technology of multi-rotor flying robot based on visual-inertial fusion[J]. ACTA AERONAUTICA ET ASTRONAUTICA SINICA, 2023, 44(5): 426964. DOI: 10.7527/S1000-6893.2022.26964
1. DING X L, GUO P, XU K, et al. A review of aerial manipulation of small-scale rotorcraft unmanned robotic systems[J]. Chinese Journal of Aeronautics, 2019, 32(1): 200-214.
2. DING X L, YU Y S. A multi-propeller and multi-function aero-robot and its motion planning of leg-wall-climbing[J]. Acta Aeronautica et Astronautica Sinica, 2010, 31(10): 2075-2086 (in Chinese).
3. ZHAO L Y, LI D, ZHAO C Y, et al. Some achievements on detection methods of autonomous landing markers for UAV[J]. Acta Aeronautica et Astronautica Sinica, 2022, 43(9): 025882 (in Chinese).
4. SARIPALLI S, MONTGOMERY J F, SUKHATME G S. Visually guided landing of an unmanned aerial vehicle[J]. IEEE Transactions on Robotics and Automation, 2003, 19(3): 371-380.
5. ZENG C. Research and application of autonomous landing system for mini-quadrotor based on visual feedback[D]. Wuhan: Wuhan University of Science and Technology, 2018: 4-7 (in Chinese).
6. ZENG F C, SHI H Q, WANG H. The object recognition and adaptive threshold selection in the vision system for landing an unmanned aerial vehicle[C]∥2009 International Conference on Information and Automation, 2009: 117-122.
7. XI R, LI Y J, HOU M S. Survey on indoor localization[J]. Computer Science, 2016, 43(4): 1-6, 32 (in Chinese).
8. SUN D Y, ZHANG R W, LI Z. Survey of indoor localization[J]. Unmanned Systems Technology, 2020, 3(3): 32-46 (in Chinese).
9. DAI X J. Research on indoor location based on RFID[D]. Chengdu: Chengdu University of Information Technology, 2019: 11-19 (in Chinese).
10. GAO X. 14 lectures on visual SLAM: From theory to practice[M]. Beijing: Publishing House of Electronics Industry, 2017: 19 (in Chinese).
11. DAVISON A J, REID I D, MOLTON N D, et al. MonoSLAM: Real-time single camera SLAM[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(6): 1052-1067.
12. MUR-ARTAL R, MONTIEL J M M, TARDOS J D. ORB-SLAM: A versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.
13. ENGEL J, SCHOPS T, CREMERS D. LSD-SLAM: Large-scale direct monocular SLAM[C]∥Computer Vision - ECCV 2014: Part II, 2014: 834-849.
14. ENGEL J, KOLTUN V, CREMERS D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(3): 611-625.
15. TATENO K, TOMBARI F, LAINA I, et al. CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction[C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 6243-6252.
16. MOURIKIS A I, ROUMELIOTIS S I. A multi-state constraint Kalman filter for vision-aided inertial navigation[C]∥2007 IEEE International Conference on Robotics and Automation (ICRA), 2007: 3565-3572.
17. QIN T, LI P L, SHEN S J. VINS-Mono: A robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
18. CAMPOS C, ELVIRA R, RODRÍGUEZ J J G, et al. ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multimap SLAM[J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890.
19. SHI J, TOMASI C. Good features to track[C]∥Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1994: 593-600.
20. ROSTEN E, DRUMMOND T. Machine learning for high-speed corner detection[C]∥European Conference on Computer Vision, 2006: 430-443.
21. LOWE D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
22. BOUGUET J Y. Pyramidal implementation of the affine Lucas-Kanade feature tracker: Description of the algorithm[R]. Intel Corporation, 2001.
23. XIE H L, CHEN W D, FAN Y X, et al. Visual-inertial SLAM in featureless environments on lunar surface[J]. Acta Aeronautica et Astronautica Sinica, 2021, 42(1): 524169 (in Chinese).
24. BURRI M, NIKOLIC J, GOHL P, et al. The EuRoC micro aerial vehicle datasets[J]. The International Journal of Robotics Research, 2016, 35(10): 1157-1163.