Low-rank feature-enhanced scene matching for aircraft localization

Yonghai WANG1, Qibin HE2, Lingxi GUO3, Chao CHEN3, Hanqing XUE1

  1. China Academy of Launch Vehicle Technology
  2. Key Laboratory of Near Space Physics
  3. Key Laboratory of Space Physics

  • Received: 2026-02-04 Revised: 2026-03-31 Online: 2026-04-02 Published: 2026-04-02
  • Corresponding author: Qibin HE

Abstract: Scene matching is a key technology for autonomous aircraft localization in satellite-navigation-denied environments. It is of great value for improving the localization capability of aircraft over regions with rich visual structure and for supporting their reliable operation in highly dynamic environments such as near space. Existing deep learning-based methods struggle to distinguish the stable intrinsic structure of an image from transient interference noise, which leads to insufficient generalization under complex domain changes, e.g., drastic variations in viewpoint, season, and modality; they also lack explicit physical priors to guarantee robustness. To address this, this paper proposes a low-rank feature-enhanced scene matching method. By embedding a low-rank prior into a deep neural network, an end-to-end Low-rank Feature Enhancement Network (LFE-Net) framework is constructed. A Schatten-p norm loss implicitly constrains the model to focus on the stable structure of the scene, and multi-task learning is incorporated to improve generalization performance. Experiments show that the method achieves higher average localization accuracy on aircraft scene matching datasets and exhibits strong robustness to complex domain changes.

Key words: scene matching, visual geo-localization, end-to-end learning, low-rank feature enhancement
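The abstract's central mechanism is a Schatten-p norm penalty that encourages deep feature matrices to be low-rank, so that stable scene structure dominates over transient noise. A minimal sketch of such a penalty follows; the paper's exact formulation, choice of p, and where the loss attaches inside LFE-Net are not given in this abstract, so the function name and p = 0.5 below are illustrative assumptions only.

```python
import numpy as np

def schatten_p_loss(features: np.ndarray, p: float = 0.5, eps: float = 1e-8) -> float:
    """Schatten-p (quasi-)norm penalty: sum over i of sigma_i(F)**p.

    sigma_i are the singular values of the feature matrix F. For
    0 < p < 1 this promotes low-rank matrices more aggressively than
    the nuclear norm (p = 1).
    """
    sigma = np.linalg.svd(features, compute_uv=False)
    # eps guards the non-smooth point sigma = 0 when p < 1.
    return float(np.sum(np.maximum(sigma, eps) ** p))

# A rank-1 matrix incurs a much smaller penalty than a full-rank one
# of comparable scale, which is the behaviour the low-rank prior exploits.
full_rank = np.eye(4)                              # four singular values equal to 1
rank_one = np.outer(np.ones(4), np.ones(4)) / 4.0  # one singular value equal to 1
print(schatten_p_loss(full_rank))
print(schatten_p_loss(rank_one))
```

In a training loop, a weighted version of this term would be added to the matching loss; an autograd framework would supply gradients through the SVD, which the plain numpy sketch above does not.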