
Acta Aeronautica et Astronautica Sinica ›› 2025, Vol. 46 ›› Issue (15): 331568. doi: 10.7527/S1000-6893.2025.31568

• Electronics and Electrical Engineering and Control •

High-precision monocular vision pose measurement for large distance span in carrier landing guidance

Lin CHEN1,2, Xiwen GU3, Zhiying CHEN1,2, Zhuo ZHANG1,2, Xiaoliang SUN1,2

  1. College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China
    2. Hunan Province Key Laboratory of Image Measurement and Vision Navigation, National University of Defense Technology, Changsha 410073, China
    3. 91351 Troops, Xingcheng 125106, China
  • Received: 2024-11-25 Revised: 2024-12-17 Accepted: 2025-01-20 Online: 2025-02-24 Published: 2025-02-21
  • Contact: Xiaoliang SUN E-mail: alexander_sxl@nudt.edu.cn
  • Supported by:
    National Natural Science Foundation of China(12272404)

Abstract:

Autonomous carrier landing guidance involves a large distance span, which causes significant scale variation of the ship target in the image sequences acquired by monocular vision guidance. Existing pose measurement methods struggle to maintain high precision across such a wide distance range. Starting from current monocular vision pose measurement methods based on sparse keypoint sets, this paper focuses on improving keypoint detection accuracy and analyzes how target size and network input size affect it. Building on this analysis, a novel monocular vision pose measurement method based on multiple components is proposed, balancing accuracy and efficiency. Components are represented compactly by sparse keypoint sets; starting from a coarse pose estimate of the overall ship target, the method introduces a path aggregation feature pyramid network and a hierarchical encoding module to detect local component keypoints with high precision. The high-precision keypoint detections of all components are then fused and the Perspective-n-Point (PnP) problem is solved, yielding robust, high-precision pose measurement across the large distance span required for landing guidance. Simulation experiments and scaled physical experiments demonstrate that the proposed method achieves robust, high-precision monocular pose measurement across the large distance span for landing guidance, outperforming existing methods, with an average single-frame inference time of approximately 40 ms on embedded platforms.
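The closing step of the pipeline, solving PnP from fused 2D keypoint detections and their known 3D model coordinates, can be illustrated with a minimal Direct Linear Transform (DLT) solver for a calibrated camera. This is a generic sketch in NumPy, not the paper's implementation; the function name, synthetic pose, and point set are invented for illustration, and a practical landing-guidance system would typically use a robust solver (e.g. EPnP inside RANSAC).

```python
import numpy as np

def dlt_pnp(pts3d, pts2d):
    """DLT-based PnP: recover rotation R and translation t from >= 6
    3D-2D correspondences given in normalized (calibrated) image coords."""
    n = pts3d.shape[0]
    A = np.zeros((2 * n, 12))
    for i, (P, (u, v)) in enumerate(zip(pts3d, pts2d)):
        # Two linear constraints per point: u*(m3.Xh) = m1.Xh, v*(m3.Xh) = m2.Xh
        A[2 * i, 0:3] = P
        A[2 * i, 3] = 1.0
        A[2 * i, 8:11] = -u * P
        A[2 * i, 11] = -u
        A[2 * i + 1, 4:7] = P
        A[2 * i + 1, 7] = 1.0
        A[2 * i + 1, 8:11] = -v * P
        A[2 * i + 1, 11] = -v
    _, _, Vt = np.linalg.svd(A)
    M = Vt[-1].reshape(3, 4)                 # projection [R|t] up to scale
    if M[2, :3] @ pts3d[0] + M[2, 3] < 0:    # enforce positive depth (cheirality)
        M = -M
    U, S, Vt2 = np.linalg.svd(M[:, :3])
    # Project the left 3x3 block onto the nearest proper rotation matrix
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt2)]) @ Vt2
    t = M[:, 3] / S.mean()                   # undo the arbitrary DLT scale
    return R, t

# Synthetic check: project random model points with a known pose, then recover it.
rng = np.random.default_rng(0)
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]]) @ \
         np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
t_true = np.array([0.5, -0.3, 10.0])
pts3d = rng.uniform(-1, 1, (8, 3))
cam = pts3d @ R_true.T + t_true              # points in the camera frame
pts2d = cam[:, :2] / cam[:, 2:3]             # normalized image coordinates
R_est, t_est = dlt_pnp(pts3d, pts2d)
```

With noiseless correspondences the DLT solution is exact up to numerical precision; with noisy keypoint detections, a nonlinear reprojection-error refinement is normally applied after this linear initialization.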

Key words: monocular, landing guidance, pose measurement, deep learning, keypoint detection
