RGB-T UAV Object Tracking Based on Feature-cooperative Reconstruction ("Multi-source Perception of UAVs in Interference Environments" Special Column)

Dong Gao, Pujian Lai, Shilei Wang, Gong Cheng

  1. Northwestern Polytechnical University
  • Received: 2025-03-25  Revised: 2025-06-09  Online: 2025-06-13  Published: 2025-06-13
  • Corresponding author: Gong Cheng
  • Supported by:
    National Natural Science Foundation of China; Science Fund for Distinguished Young Scholars of Shaanxi Province

Abstract: RGB-T Unmanned Aerial Vehicle (UAV) object tracking enhances tracking robustness in complex environments by fusing complementary information from the visible (RGB) and thermal infrared (TIR) modalities. However, existing methods neglect the noise interference caused by modality gaps, which weakens the effectiveness of cross-modal feature complementarity and degrades feature representation, limiting the performance of RGB-T UAV trackers. To address this issue, a feature-cooperative reconstruction-based tracker is proposed. Its core is a feature-cooperative reconstruction module consisting of a cross-modal interaction encoder and a feature reconstruction decoder. Specifically, the cross-modal interaction encoder employs an adaptive feature interaction strategy to extract critical complementary information from the auxiliary modality while effectively suppressing cross-modal noise interference. The feature reconstruction decoder then uses the query features produced by the encoder to guide feature reconstruction, preserving modality-specific information while incorporating cross-modal complementary details, thereby enhancing feature representation. Additionally, to improve target localization accuracy in dynamic scenes, a cross-modal location cue fusion module is proposed to integrate the search regions of different modalities, providing more precise localization cues. Finally, extensive experiments are conducted on two RGB-T UAV object tracking benchmarks (VTUAV and HiAL) as well as the LasHeR dataset. The results demonstrate that the proposed method significantly outperforms existing methods on the VTUAV and HiAL datasets. In particular, compared with HMFT on the VTUAV dataset, the proposed method improves the success rate and precision by 9.9% and 9.0%, respectively.
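To make the encoder-decoder design concrete, the following PyTorch sketch shows one plausible reading of the feature-cooperative reconstruction module described above. It is not the authors' implementation: the class names, token dimensions, and the sigmoid gate used to realize "adaptive feature interaction" are illustrative assumptions.

```python
# Minimal sketch of a feature-cooperative reconstruction module, assuming
# tokenized (B, N, C) features per modality. All design details below are
# illustrative guesses, not the paper's actual implementation.
import torch
import torch.nn as nn


class CrossModalInteractionEncoder(nn.Module):
    """The primary modality queries the auxiliary one via cross-attention;
    a learned per-token gate admits complementary cues and damps noise."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Assumed gating mechanism: one plausible way to make the
        # interaction "adaptive" and suppress cross-modal noise.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, primary: torch.Tensor, auxiliary: torch.Tensor) -> torch.Tensor:
        attended, _ = self.cross_attn(primary, auxiliary, auxiliary)
        g = self.gate(torch.cat([primary, attended], dim=-1))
        # Gated residual: complementary information enters, noise is damped.
        return self.norm(primary + g * attended)


class FeatureReconstructionDecoder(nn.Module):
    """Reconstructs modality features under guidance of the encoder's query
    features; the residual path preserves modality-specific content."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.guide_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, modality_feat: torch.Tensor, query_feat: torch.Tensor) -> torch.Tensor:
        # Original modality features attend to the encoder's queries, so
        # cross-modal complements are injected without overwriting them.
        guided, _ = self.guide_attn(modality_feat, query_feat, query_feat)
        x = self.norm1(modality_feat + guided)
        return self.norm2(x + self.ffn(x))


if __name__ == "__main__":
    rgb = torch.randn(2, 196, 256)   # e.g. 14x14 RGB search-region tokens
    tir = torch.randn(2, 196, 256)   # matching TIR tokens
    encoder = CrossModalInteractionEncoder()
    decoder = FeatureReconstructionDecoder()
    queries = encoder(rgb, tir)       # interaction with noise gating
    rgb_recon = decoder(rgb, queries) # guided reconstruction
    print(rgb_recon.shape)            # torch.Size([2, 196, 256])
```

The same pair of modules can be applied symmetrically with TIR as the primary modality; the cross-modal location cue fusion module described above would then merge the two reconstructed search regions before localization.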

Key words: RGB-T UAV Object Tracking, Transformer, Cross-modal Feature Interaction, Feature-cooperative Reconstruction, Cross-modal Location Cue Fusion

CLC Number: