In GNSS-denied environments, aircraft visual navigation faces two challenges: significant nighttime cross-modal image discrepancies and feature-matching failures caused by large field-of-view rotations. To address them, this paper proposes a fast infrared/satellite cross-modal matching localization method for aircraft based on image translation, aiming to improve nighttime matching and localization performance through image translation and a pre-rotation matching strategy. First, we construct HC_CycleGAN, a cross-modal translation model that combines a Huber loss with a spatial attention mechanism to translate satellite imagery into the infrared domain, resolving the modality misalignment between heterogeneous images. Second, we design FastPoint, a fast feature-point extraction network whose R-DVM computational unit combines the designed deep variable convolution layer with a residual learning mechanism to improve computational efficiency and training stability. Finally, we propose a rotational matching method based on the L-LightGlue dynamic adaptive matching algorithm: a geometry-invariant pre-rotation matching strategy corrects for rotation, and the recovered inter-image transformation determines the aircraft's position in the satellite image, completing visual localization. Experimental results show that, compared with existing matching localization methods, the proposed approach not only improves matching efficiency but also significantly improves the accuracy of infrared/satellite cross-modal matching localization under large field-of-view rotations.
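As a concrete illustration of the translation step, the sketch below pairs a CBAM-style spatial attention gate with a Huber loss in PyTorch. The abstract does not specify where HC_CycleGAN applies the Huber loss, so it is assumed here to replace the usual L1 cycle-consistency term; `G_s2i` and `G_i2s` are hypothetical generator names, and this attention variant is one common choice, not necessarily the paper's exact design.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention gate (a common variant; the exact
    design used in HC_CycleGAN is not given in the abstract)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)        # channel-average map
        mx, _ = x.max(dim=1, keepdim=True)       # channel-max map
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                          # reweight spatial locations

# Assumed placement of the Huber loss: replacing the usual L1
# cycle-consistency term. G_s2i / G_i2s are hypothetical generators.
huber = nn.HuberLoss(delta=1.0)  # quadratic near zero, linear in the tails

def cycle_loss(G_s2i, G_i2s, sat, ir, lam=10.0):
    sat_rec = G_i2s(G_s2i(sat))  # satellite -> infrared -> satellite
    ir_rec = G_s2i(G_i2s(ir))    # infrared -> satellite -> infrared
    return lam * (huber(sat_rec, sat) + huber(ir_rec, ir))
```

The Huber loss behaves quadratically for small residuals and linearly for large ones, which dampens the influence of outlier pixels that are common between infrared and satellite modalities.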
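The abstract describes the R-DVM unit only as a combination of the deep variable convolution layer and residual learning, so the following is a minimal sketch under the assumption that the layer behaves like a deformable convolution; torchvision's `DeformConv2d` is used as a stand-in, and all names and hyperparameters (`RDVM`, `k`, the channel counts) are illustrative.

```python
import torch.nn as nn
from torchvision.ops import DeformConv2d

class RDVM(nn.Module):
    """Hypothetical R-DVM unit: a residual block around a deformable
    convolution, standing in for the paper's deep variable convolution
    layer, whose exact structure the abstract does not describe."""
    def __init__(self, channels, k=3):
        super().__init__()
        # the offset branch predicts an (x, y) shift for each kernel tap
        self.offset = nn.Conv2d(channels, 2 * k * k, k, padding=k // 2)
        self.dconv = DeformConv2d(channels, channels, k, padding=k // 2)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.bn(self.dconv(x, self.offset(x)))  # sample at learned offsets
        return self.act(x + out)  # identity shortcut stabilizes training
```

The identity shortcut is consistent with the abstract's claim that residual learning improves training stability, since it keeps gradients flowing even when the deformable branch is poorly conditioned early in training.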
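The pre-rotation matching idea can be sketched end to end: rotate the infrared query through a coarse set of candidate angles, keep the angle whose matches yield the most RANSAC inliers, and map the query centre through the recovered homography to obtain the aircraft's position in the satellite frame. Since FastPoint and L-LightGlue are not public, OpenCV's ORB detector and brute-force matcher stand in for them here, and the 30-degree angle step and thresholds are assumptions.

```python
import cv2
import numpy as np

def prerotation_localize(ir_img, sat_img, angles=range(0, 360, 30)):
    """Coarse pre-rotation matching, then homography-based localization."""
    orb = cv2.ORB_create(2000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    k_sat, d_sat = orb.detectAndCompute(sat_img, None)
    h, w = ir_img.shape[:2]
    best = (0, None, None)  # (inlier count, homography, angle)
    for a in angles:
        R = cv2.getRotationMatrix2D((w / 2, h / 2), a, 1.0)
        rot = cv2.warpAffine(ir_img, R, (w, h))      # trial rotation
        k_ir, d_ir = orb.detectAndCompute(rot, None)
        if d_ir is None:
            continue
        m = bf.match(d_ir, d_sat)
        if len(m) < 8:
            continue
        src = np.float32([k_ir[x.queryIdx].pt for x in m]).reshape(-1, 1, 2)
        dst = np.float32([k_sat[x.trainIdx].pt for x in m]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        if H is not None and int(mask.sum()) > best[0]:
            best = (int(mask.sum()), H, a)           # keep best angle
    inliers, H, a = best
    if H is None:
        return None
    # Aircraft position = query centre mapped into the satellite frame.
    centre = np.float32([[[w / 2, h / 2]]])
    return cv2.perspectiveTransform(centre, H)[0, 0], a
```

Because each trial rotation is taken about the image centre, the centre point is invariant under the pre-rotation, so the homography from the best angle maps it directly to the aircraft's location in the satellite image.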