High-Precision Monocular Vision Pose Measurement for Large Distance Span in Carrier Landing Guidance

  • CHEN Lin,
  • GU Xi-Wen,
  • CHEN Zhi-Ying,
  • ZHANG Zhuo,
  • SUN Xiao-Liang
  • 1. National University of Defense Technology
    2. Unit 91351 of the Chinese People's Liberation Army
    3. College of Aerospace Science and Engineering, National University of Defense Technology

Received date: 2024-11-25

  Revised date: 2025-01-24

  Online published: 2025-02-21

Funding

National Natural Science Foundation of China


Abstract

Autonomous carrier landing guidance spans a large range of distances, which causes large scale variations of the ship target across the image sequences acquired by airborne monocular vision. Existing pose measurement methods struggle to deliver high-precision monocular pose measurement over such a wide distance range. Starting from the goal of improving keypoint detection accuracy in existing monocular pose measurement methods based on sparse keypoint sets, this paper analyzes how target size and network input size affect keypoint detection accuracy. Balancing accuracy and efficiency, it then proposes a novel multi-component monocular vision pose measurement method in which each component is compactly represented by a sparse keypoint set. Building on a coarse pose estimate obtained from the ship target as a whole, the method introduces a path aggregation feature pyramid network and a hierarchical encoding module to detect local component keypoints with high precision. The high-precision keypoint detections from all components are then combined, and the Perspective-n-Point (PnP) problem is solved to obtain robust, high-precision pose measurement over the full distance span of landing guidance. Simulation experiments and scaled physical experiments show that the proposed method achieves robust, high-precision monocular pose measurement over the large distance span of landing guidance and outperforms existing methods, with an average per-frame inference time of about 40 ms on an embedded platform.
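
As a rough sketch of the final stage described in the abstract, the snippet below merges the per-component 2D keypoint detections with their corresponding 3D model points and recovers a 6-DoF pose by solving the PnP problem. This is not the paper's implementation: it assumes OpenCV's solvePnPRansac with the EPnP solver, and the names obj_pts_3d, img_pts_2d, and K are illustrative.

import numpy as np
import cv2

def estimate_pose(obj_pts_3d, img_pts_2d, K, dist_coeffs=None):
    """Recover a 6-DoF pose from matched 3D model points and 2D keypoint
    detections aggregated over all ship components (illustrative sketch)."""
    obj = np.asarray(obj_pts_3d, dtype=np.float64).reshape(-1, 3)
    img = np.asarray(img_pts_2d, dtype=np.float64).reshape(-1, 2)
    # RANSAC rejects outlier keypoint detections; EPnP solves each sample set.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img, K, dist_coeffs,
        flags=cv2.SOLVEPNP_EPNP,
        reprojectionError=3.0,   # inlier threshold in pixels (assumed value)
        iterationsCount=200,
    )
    if not ok:
        raise RuntimeError("PnP estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec, inliers

Wrapping the solver in RANSAC keeps the pose robust to occasional keypoint outliers, which matters when the keypoints come from several independently detected components.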

Cite this article

CHEN Lin, GU Xi-Wen, CHEN Zhi-Ying, ZHANG Zhuo, SUN Xiao-Liang. High-precision monocular vision pose measurement for large distance span in carrier landing guidance[J]. Acta Aeronautica et Astronautica Sinica, 0: 1-0. DOI: 10.7527/S1000-6893.2025.31568

