In real-time low-altitude remote-sensing inspection, strong parallax, scale variation, and local distortion make the traditional "stitch-then-detect" serial paradigm prone to feature redundancy and error cascading, and make it difficult to unify global and local geometric transformations, which in turn limits speed and weakens robustness. To address this, we propose FSDNet (Fast Stitching and Detection Network), a fast dual-branch framework for simultaneous stitching and detection built on a shared backbone. FSDNet adopts a pretrained detection backbone as a unified encoder and embeds an attention-guided local context correlation module in the stitching branch to explicitly regress fine-grained geometric flow fields from the shared features. Two collaborative branches estimate the transformation field: a global homography branch for coarse alignment and a local thin-plate-spline branch for fine alignment. These are combined with transformation-guided detection-box rectification and intensity-adaptive fusion to improve geometric–semantic consistency. Experiments on UDIS-D and Warped AU-AIR show that, while maintaining high-quality stitching, the proposed method improves frames per second (FPS) by about 66% over typical serial baselines and achieves leading object-detection performance on Warped AU-AIR, validating its efficiency and practicality.
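The coarse-to-fine alignment pairs a global homography with a local thin-plate spline (TPS). As an illustration of the TPS component only — a classical least-bending TPS fit over a few control points, not the paper's learned regressor — the helper names `tps_fit` and `tps_warp` below are hypothetical:

```python
import numpy as np

def _U(r):
    """TPS radial basis U(r) = r^2 log r, with U(0) = 0."""
    out = np.zeros_like(r)
    m = r > 0
    out[m] = r[m] ** 2 * np.log(r[m])
    return out

def tps_fit(src, dst):
    """Fit TPS coefficients mapping (N, 2) control points src -> dst.
    Solves the standard system [[K, P], [P^T, 0]] @ coef = [dst, 0]."""
    n = len(src)
    K = _U(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])          # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)                   # rows 0..n-1: weights, n..n+2: affine

def tps_warp(src, coef, pts):
    """Map (M, 2) query points through the fitted spline."""
    U = _U(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ coef[:len(src)] + P @ coef[len(src):]

# Control points: unit-square corners translated by (1, 2).
src = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
dst = src + np.array([1.0, 2.0])
coef = tps_fit(src, dst)
print(tps_warp(src, coef, np.array([[0.5, 0.5]])))  # a pure translation is reproduced: ~[[1.5, 2.5]]
```

Because the control-point targets here are an exact translation, the minimum-bending solution degenerates to the affine part; real aerial imagery with parallax would yield nonzero radial weights.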
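Transformation-guided detection-box rectification is not specified beyond the abstract; a minimal sketch of the common recipe — warp a box's four corners through the estimated homography, then re-enclose them in an axis-aligned box — is shown below. The function names `warp_points` and `rectify_box` are assumptions for illustration:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # lift to homogeneous coords
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]              # divide out the projective scale

def rectify_box(H, box):
    """Warp an axis-aligned box (x1, y1, x2, y2) through H and
    re-enclose the four warped corners in a new axis-aligned box."""
    x1, y1, x2, y2 = box
    corners = np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2]], dtype=float)
    w = warp_points(H, corners)
    return tuple(float(v) for v in
                 (w[:, 0].min(), w[:, 1].min(), w[:, 0].max(), w[:, 1].max()))

# Pure translation by (10, 5): the box shifts rigidly.
H = np.array([[1, 0, 10], [0, 1, 5], [0, 0, 1]], dtype=float)
print(rectify_box(H, (0, 0, 20, 10)))  # → (10.0, 5.0, 30.0, 15.0)
```

Under a general homography the warped corners form a quadrilateral, so the re-enclosing step slightly inflates the box; in the paper's setting the local TPS field would refine this further.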
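The intensity-adaptive fusion module is likewise only named in the abstract; a simple feathering baseline that ramps blend weights linearly across a horizontal overlap (an assumption for illustration, not the paper's module) gives a reference point for what such fusion does:

```python
import numpy as np

def linear_blend(img_a, img_b, overlap):
    """Blend grayscale img_a (left) and img_b (right), which overlap by
    `overlap` columns, ramping img_a's weight from 1 to 0 across the seam."""
    h, wa = img_a.shape
    wb = img_b.shape[1]
    out = np.zeros((h, wa + wb - overlap), dtype=float)
    out[:, :wa - overlap] = img_a[:, :wa - overlap]    # exclusive to img_a
    out[:, wa:] = img_b[:, overlap:]                   # exclusive to img_b
    t = np.linspace(1.0, 0.0, overlap)                 # per-column weight for img_a
    out[:, wa - overlap:wa] = (t * img_a[:, wa - overlap:]
                               + (1 - t) * img_b[:, :overlap])
    return out

a = np.ones((4, 6))     # left image (all white)
b = np.zeros((4, 6))    # right image (all black)
pano = linear_blend(a, b, overlap=2)
print(pano.shape)  # → (4, 10)
```

Intensity-adaptive schemes refine this idea by modulating the weights with local brightness statistics instead of a fixed ramp, which suppresses visible seams under exposure differences.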