Acta Aeronautica et Astronautica Sinica > 2022, Vol. 43, Issue (12): 326296-326296   doi: 10.7527/S1000-6893.2021.26296

Multi-feature fusion based visual positioning method for lunar surface sampling teleoperation

LIU Chuankai1,2,3, LI Dongsheng3, XIE Jianfeng1, LEI Junxiong3, YUAN Chunqiang1,2, HE Ximing1

  1. Beijing Aerospace Control Center, Beijing 100190, China;
    2. National Key Laboratory of Science and Technology on Aerospace Flight Dynamics, Beijing 100190, China;
    3. School of Electrical Engineering and Automation, Jiangxi University of Science and Technology, Ganzhou 341000, China
  • Received: 2021-09-01  Revised: 2021-09-22  Published: 2021-11-10
  • Corresponding author: LIU Chuankai, E-mail: ckliu2005@126.com
  • Supported by:
    National Natural Science Foundation of China (61972020); Fund of the National Defense Science and Technology Key Laboratory of Equipment Pre-Research (19KY1213, 19NY1208)

Multi-feature fusion based visual positioning method for lunar surface sampling teleoperation

LIU Chuankai1,2,3, LI Dongsheng3, XIE Jianfeng1, LEI Junxiong3, YUAN Chunqiang1,2, HE Ximing1   

  1. Beijing Aerospace Control Center, Beijing 100190, China;
    2. National Key Laboratory of Science and Technology on Aerospace Flight Dynamics, Beijing 100190, China;
    3. School of Electrical Engineering and Automation, Jiangxi University of Science and Technology, Ganzhou 341000, China
  • Received: 2021-09-01  Revised: 2021-09-22  Published: 2021-11-10
  • Supported by:
    National Natural Science Foundation of China (61972020); Fund of the National Defense Science and Technology Key Laboratory of Equipment Pre-Research (19KY1213, 19NY1208)

Abstract: In the Chang'e 5 lunar surface sampling mission, the end-effector of the sampling manipulator suffered operation errors caused by the flexibility of its links and joints, so precise operations such as sampling, sample placing, and grasping, transferring, and placing the sample container had to be guided by the multiple vision cameras carried on the lander and the manipulator. To meet this need, this paper designs a localization scheme combining the lander's surveillance cameras with the arm-mounted camera at the manipulator's end, proposes a visual positioning method that combines natural features of circular objects with fiducial-marker features, establishes a unified multi-feature fusion visual positioning framework based on inverse projection of the imaging rays, analyzes the influence of manipulator operation errors on the convergence of visual positioning under different working conditions, designs same-scale error models for the different feature types together with an iterative optimization strategy, and realizes combined positioning from different feature types with single or multiple cameras for the various operation scenarios. Experimental analysis and verification show that the binocular ellipse visual positioning accuracy in sampling operations is better than 2 cm, and the accuracy of combined circular-object and marker visual positioning in operations such as sample placing and container grasping is better than 2 mm, meeting the requirements for guiding precise manipulator operations and successfully supporting the Chang'e 5 lunar surface sampling and packaging mission.
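The binocular ellipse positioning mentioned above reduces, at its core, to triangulating the fitted circle center from its projections in two calibrated cameras. The following is a minimal numpy sketch for an idealized rectified stereo pair, not the paper's actual formulation; the function name, intrinsics, and baseline are illustrative assumptions:

```python
import numpy as np

def triangulate(uv_left, uv_right, fx, fy, cx, cy, baseline):
    """Recover a 3-D point from its pixel coordinates in a rectified
    stereo pair (pinhole model, nonzero disparity assumed).
    Horizontal disparity d = uL - uR gives depth Z = fx * b / d."""
    d = uv_left[0] - uv_right[0]        # disparity in pixels
    Z = fx * baseline / d               # depth along the optical axis
    X = (uv_left[0] - cx) * Z / fx      # back-project through left camera
    Y = (uv_left[1] - cy) * Z / fy
    return np.array([X, Y, Z])
```

With noiseless synthetic projections this inverts the pinhole model exactly; in practice the inputs would be the ellipse centers detected in the two surveillance-camera images.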

Key words: Chang'e 5, visual positioning, multi-feature fusion, robotic arm control, teleoperation

Abstract: In lunar surface sampling by Chang'e 5, it is difficult to use the slender, flexible manipulator to achieve precise operations such as digging, placing the sample into the container, and container grasping and assembly. In this paper, a vision system consisting of static cameras installed on the lander and dynamic cameras mounted on the manipulator is developed to acquire images and estimate the position of the end-effector relative to the targets, so as to guide these precise operations. We propose a positioning method that combines two kinds of features observed by multiple cameras, circular objects and fiducial markers, and build a unified framework for calculating the relative poses between the end-effector and the targets in the different operations. The influence of manipulator errors on the convergence of visual positioning is analyzed, and several applications combining different features and cameras are realized for digging, sample placing, and other operations. Experimental validation and analysis show that the accuracy of binocular ellipse positioning in digging is better than 2 cm, and that of combined circular-object and marker positioning in sample placing is better than 2 mm. These results meet the requirements of the various operations, and the proposed method successfully supported implementation of the Chang'e 5 lunar surface sampling and packaging mission.
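A unified framework of this kind typically weights the image-plane residuals of features with different measurement uncertainty onto a common scale and iteratively refines the relative pose. The paper's actual error models are not reproduced here; the sketch below is a generic Gauss-Newton refinement under a pinhole model, with all function names, intrinsics, and weights invented for illustration:

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(points, rvec, tvec, fx, fy, cx, cy):
    """Project 3-D target-frame points to pixels through a pinhole camera."""
    pc = points @ rodrigues(rvec).T + tvec      # target frame -> camera frame
    return np.column_stack((fx * pc[:, 0] / pc[:, 2] + cx,
                            fy * pc[:, 1] / pc[:, 2] + cy))

def refine_pose(points, observed, weights, x0, intrinsics, iters=20):
    """Gauss-Newton refinement of pose x = (rvec, tvec), minimizing the
    weighted reprojection error; per-feature weights put residuals of
    features with different uncertainty on a common scale."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        def residual(p):
            uv = project(points, p[:3], p[3:], *intrinsics)
            return (weights[:, None] * (uv - observed)).ravel()
        r = residual(x)
        J = np.zeros((r.size, 6))               # numerical Jacobian
        eps = 1e-6
        for j in range(6):
            step = np.zeros(6)
            step[j] = eps
            J[:, j] = (residual(x + step) - r) / eps
        x = x - np.linalg.solve(J.T @ J + 1e-9 * np.eye(6), J.T @ r)
    return x
```

On noiseless synthetic observations (e.g. one circle center plus four marker corners) the refinement recovers the simulated pose from a nearby initial guess; real inputs would come from the circle and marker detectors described above.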

Key words: Chang'e 5, visual positioning, multi-feature fusion, robotic arm control, teleoperation
