Acta Aeronautica et Astronautica Sinica > 2025, Vol. 46 Issue (11): 531281-531281   doi: 10.7527/S1000-6893.2024.31281

UAV swarm positioning method based on monocular vision and ranging information

Kun LI1,2, Shuhui BU1,2(), Jiapeng LI1,2, Juboxi WANG1,2, Pengcheng HAN1,2, Xiaohan LI1,2, Haowei LI1,2   

  1. School of Aeronautics, Northwestern Polytechnical University, Xi'an 710072, China
    2. National Key Laboratory of Aircraft Configuration Design, Xi'an 710072, China
  • Received:2024-09-27 Revised:2024-10-17 Accepted:2024-11-22 Online:2024-12-12 Published:2024-11-29
  • Contact: Shuhui BU E-mail:bushuhui@nwpu.edu.cn
  • Supported by:
    Postdoctoral Fellowship Program of CPSF (GZB20240986)

Abstract:

Unmanned Aerial Vehicle (UAV) swarms play a pivotal role in the low-altitude economy. Accurate swarm positioning information underpins mission coordination, resource optimization, and efficient scheduling among drones, thereby facilitating the sustainable advancement of the low-altitude economy. In complex environments, however, Global Navigation Satellite System (GNSS) signals may be disrupted, rendering it difficult for UAV swarms to obtain accurate positioning data and compromising their ability to function collaboratively. To address this challenge in GNSS-denied environments, this paper presents a UAV swarm positioning method that integrates monocular vision and ranging information. Visual Odometry (VO) enables autonomous positioning for each UAV within the swarm, and a communication framework is designed to transmit only essential data, including visual keyframes, pose frames, and map points, to a central server at low communication bandwidth. The concept of a pose frame is introduced to overcome the limitation that keyframes cannot be fused directly with ranging information. The central server aligns the maps from different UAVs using either the co-visibility relationships between keyframes of different maps or the constraints between pose frames and their corresponding ranging measurements. The server then fuses and optimizes the aligned maps using both visual and ranging information, achieving swarm-wide positioning. After global optimization, the server sends the corrected keyframes and map points back to each UAV's local VO map, further improving positioning accuracy. The proposed method is validated through simulations and real-world experiments. Results show an absolute swarm positioning error of 0.49 m, outperforming current mainstream visual positioning methods, and a scale error of 3.2%, effectively resolving the scale ambiguity inherent in monocular visual positioning.
The proposed method achieves swarm positioning based solely on inter-UAV ranging information, eliminating the dependence of swarm trajectories on co-visibility and providing robust positioning data for UAV swarms operating in complex environments.
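The 3.2% scale error reported above comes from fusing inter-UAV range measurements with monocular VO. As a minimal illustration of how ranging can resolve the monocular scale ambiguity, the sketch below estimates a single scale factor by least squares. This is our own simplification, not the paper's implementation: the function name `estimate_scale` and the assumption of one scale shared by both trajectories are ours; the paper instead optimizes full keyframe and pose-frame graphs.

```python
import numpy as np

def estimate_scale(traj_a, traj_b, ranges):
    """Least-squares scale from inter-UAV range measurements.

    traj_a, traj_b: (N, 3) VO positions of two UAVs, expressed in a
        shared but scale-ambiguous frame (monocular VO).
    ranges: (N,) measured inter-UAV distances at the same timestamps.
    Returns the scale s minimizing sum_k (s * r_k - d_k)^2,
    where r_k = ||a_k - b_k|| is the unscaled inter-UAV distance.
    The closed-form solution is s = (d . r) / (r . r).
    """
    r = np.linalg.norm(traj_a - traj_b, axis=1)
    return float(np.dot(ranges, r) / np.dot(r, r))
```

Scaling the VO trajectories by the returned factor brings them into metric units, which is the prerequisite for the map alignment and fusion steps performed on the central server.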

Key words: UAV swarm, visual positioning, range measurement, swarm positioning, graph optimization
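The graph-optimization fusion step summarized in the abstract, jointly refining trajectories under visual-odometry and ranging constraints, can be sketched as a small nonlinear least-squares problem. The toy version below is our assumption-laden reduction: the function `fuse`, the weights `w_odo`/`w_rng`, and the position-only state are illustrative choices, whereas the paper optimizes keyframe and pose-frame graphs with map points on a central server.

```python
import numpy as np
from scipy.optimize import least_squares

def fuse(odom_a, odom_b, ranges, w_odo=1.0, w_rng=1.0):
    """Jointly refine two UAV position tracks (each (N, 3)) so that
    consecutive displacements stay close to the VO odometry while
    inter-UAV distances match the range measurements."""
    n = len(ranges)

    def residuals(x):
        pa = x[:3 * n].reshape(n, 3)
        pb = x[3 * n:].reshape(n, 3)
        res = [
            # odometry constraints: preserve relative displacements
            w_odo * (np.diff(pa, axis=0) - np.diff(odom_a, axis=0)).ravel(),
            w_odo * (np.diff(pb, axis=0) - np.diff(odom_b, axis=0)).ravel(),
            # ranging constraints: match measured inter-UAV distances
            w_rng * (np.linalg.norm(pa - pb, axis=1) - ranges),
            # anchor the first pose of UAV A to remove gauge freedom
            pa[0] - odom_a[0],
        ]
        return np.concatenate(res)

    x0 = np.concatenate([odom_a.ravel(), odom_b.ravel()])
    sol = least_squares(residuals, x0)
    return sol.x[:3 * n].reshape(n, 3), sol.x[3 * n:].reshape(n, 3)
```

Because the ranging residuals couple the two tracks, this formulation needs no co-visible features between the UAVs' cameras, mirroring the abstract's claim that positioning can rest on inter-UAV distances alone.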
