低空遥感场景下的船舶目标关联技术是推动海上监测及其智能感知系统发展的重要支撑。然而,现有方法多直接迁移行人或车辆重识别算法,难以有效应对船舶图像中的特有问题,尤其是因无人机等低空遥感平台成像视角多变导致的类内差异大、局部信息缺失等挑战,这往往导致同一船舶目标出现异常样本,极大影响关联精度。为了解决上述问题,本文提出一种基于多尺度相关性Transformer网络的船舶目标关联算法。与现有方法不同,该算法能够同时对输入图像集合进行多尺度显式的全局和局部相关性建模,且在模型训练时,不只依赖单幅图像的孤立特征进行学习,而是融合利用图像间的互补信息,抑制由类内差异或局部缺失引起的异常样本影响。具体而言,本文设计了全局关联模块,构建完整输入图像间的全局相似性关联矩阵,基于图像间一致性进行特征聚合,实现显式全局相关性建模;同时设计了局部关联模块,构建一个基于动态更新机制的记忆库,挖掘并对齐正样本的局部特征,通过上下文相似性提取局部相关性。在四个公开实测数据集上的实验结果表明,本文所提方法在目标关联准确度的性能指标上均优于现有主流算法,验证了其有效性、鲁棒性与工程实用潜力。
Vessel target association under low-altitude remote-sensing scenarios is a crucial component supporting the development of maritime monitoring and intelligent perception systems. However, most existing approaches directly migrate pedestrian or vehicle re-identification algorithms, which fail to effectively handle the unique challenges of vessel imagery, particularly the large intra-class variations and local information loss caused by the diverse imaging perspectives of UAV-based low-altitude imaging platforms. These issues often lead to outlier samples within the same vessel identity, significantly degrading association accuracy. To overcome these limitations, this paper proposes a Multi-scale Correlation-aware Transformer network (MCFormer) for vessel target association. Unlike conventional methods that learn from isolated features of single images, MCFormer performs explicit global and local correlation modeling across multi-scale image collections, leveraging inter-image complementary information to suppress the effects of intra-identity variance and partial occlusion. Specifically, a Global Correlation Module (GCM) constructs a comprehensive inter-image similarity matrix to achieve explicit global correlation modeling through consistency-based feature aggregation, while a Local Correlation Module (LCM) builds a dynamically updated memory bank to mine and align positive local features, capturing fine-grained contextual correlations. Experiments conducted on four publicly available real-world datasets demonstrate that the proposed method consistently outperforms mainstream algorithms in both mAP and Rank-n metrics, verifying its effectiveness, robustness, and engineering potential.
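The global correlation idea described above (an inter-image similarity matrix driving consistency-based feature aggregation) can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the cosine-similarity choice, and the softmax temperature `tau` are all assumptions introduced here for illustration.

```python
import numpy as np

def global_correlation_aggregate(features, tau=0.1):
    """Illustrative sketch: build an inter-image cosine-similarity
    matrix over a batch of image embeddings, then refine each
    embedding as a similarity-weighted mixture of the whole batch,
    so consistent images reinforce each other and outliers are
    down-weighted."""
    # L2-normalize so that dot products become cosine similarities
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = norm @ norm.T                            # (N, N) global similarity matrix
    weights = np.exp(sim / tau)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax over the batch
    return weights @ features                      # aggregated features, shape (N, D)
```

With identical inputs the aggregation is a no-op (uniform weights reproduce the shared embedding), which matches the intuition that aggregation should only move features when the batch disagrees.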
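The abstract's dynamically updated memory bank for local features can likewise be sketched in miniature. The momentum-update rule, class name, and method signatures below are assumptions (the abstract does not specify the update mechanism); the sketch only shows the general pattern of keeping one running prototype per identity against which positive samples are compared.

```python
import numpy as np

class MomentumMemoryBank:
    """Hypothetical sketch of a dynamically updated memory bank:
    each identity keeps an L2-normalized running prototype, updated
    with exponential momentum, that candidate local features can be
    matched and aligned against."""

    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.bank = {}  # identity -> L2-normalized prototype vector

    @staticmethod
    def _normalize(v):
        return v / (np.linalg.norm(v) + 1e-12)

    def update(self, identity, feature):
        f = self._normalize(feature)
        if identity not in self.bank:
            self.bank[identity] = f
        else:
            # momentum mixing keeps the prototype stable across batches
            mixed = self.momentum * self.bank[identity] + (1 - self.momentum) * f
            self.bank[identity] = self._normalize(mixed)

    def similarity(self, identity, feature):
        # cosine similarity between a query feature and the stored prototype
        return float(self.bank[identity] @ self._normalize(feature))
```

Because both the prototype and the query are normalized, `similarity` is scale-invariant, so a zoomed-in crop of the same local region scores the same as the original.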