Acta Aeronautica et Astronautica Sinica, 2023, Vol. 44, Issue (5): 426964-426964   doi: 10.7527/S1000-6893.2022.26964

Indoor positioning technology of multi-rotor flying robot based on visual-inertial fusion

Huaijie ZHANG1, Jingya MA2, Haoyuan LIU1, Pin GUO1, Huichao DENG1, Kun XU1, Xilun DING1

  1. Robotics Institute, Beihang University, Beijing 100191, China
  2. Institute of Spacecraft System Engineering, CAST, Beijing 100094, China
  • Received: 2022-01-17  Revised: 2022-02-17  Accepted: 2022-03-23  Online: 2022-04-13  Published: 2022-04-12
  • Contact: Huichao DENG  E-mail: denghuichao@buaa.edu.cn
  • Supported by:
    National Natural Science Foundation of China (91748201)

Abstract:

With the development of artificial intelligence technology, the application scenarios of UAVs are becoming increasingly diverse. Users are no longer satisfied with simple flight tasks alone; the UAV is instead given the role of a flying robot, which places higher requirements on its autonomous navigation, positioning in complex environments, and intelligent cooperation. To meet the positioning requirements of indoor scenes, indoor positioning of a multi-rotor flying robot is realized by fusing visual and inertial data. An image enhancement algorithm is added to the visual front end to improve the grayscale contrast of the image, reducing the number of mismatched points in optical flow tracking. To address the drift problem in visual-inertial fusion positioning of the flying robot, a strategy for feature point extraction and image frame publishing based on image information is proposed, which improves the positioning accuracy and resolves the positioning drift problem in indoor environments. For the indoor autonomous tracking and landing task of the flying robot, an autonomous landing system based on visual positioning is designed. A flying robot model is built in Gazebo to verify the effectiveness of the autonomous landing system in simulation, and the positioning algorithm is compared and evaluated on the EuRoC dataset. A flying robot platform is built for indoor positioning experiments in a real scene, where the autonomous tracking and landing task on a ground platform is completed. Error analysis is carried out using ground-truth positioning data from a motion capture system. The results show that the positioning technology meets the requirements of autonomous tracking and landing tasks in indoor scenes.

Key words: flying robot, indoor positioning, autonomous tracking and landing, sensor data fusion, visual navigation
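
A minimal illustration of the front-end idea summarized in the abstract: grayscale contrast is enhanced before feature detection and optical flow tracking so that fewer point correspondences are mismatched between frames. The paper does not specify the enhancement algorithm or tracker parameters, so the Python/OpenCV sketch below assumes CLAHE for enhancement and pyramidal Lucas-Kanade for tracking; all function names and settings here are illustrative, not the authors' implementation.

import cv2

# Contrast-limited adaptive histogram equalization (CLAHE): an assumed stand-in
# for the "image enhancement algorithm" in the visual front end.
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))

def detect_features(gray, max_corners=150):
    # Detect new corners on the contrast-enhanced image when tracked points run low.
    enhanced = clahe.apply(gray)
    return cv2.goodFeaturesToTrack(enhanced, maxCorners=max_corners,
                                   qualityLevel=0.01, minDistance=20)

def track_features(prev_gray, curr_gray, prev_pts):
    # Enhance both frames, then track prev_pts with pyramidal Lucas-Kanade optical
    # flow; points whose status flag is 0 are dropped as mismatches.
    prev_enh = clahe.apply(prev_gray)
    curr_enh = clahe.apply(curr_gray)
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_enh, curr_enh, prev_pts, None, winSize=(21, 21), maxLevel=3)
    kept = status.reshape(-1) == 1
    return prev_pts[kept], curr_pts[kept]

In a complete visual-inertial pipeline, the surviving point pairs would feed the visual constraints fused with the inertial measurements; this sketch covers only the enhanced front-end tracking step.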
