ACTA AERONAUTICA ET ASTRONAUTICA SINICA ›› 2022, Vol. 43 ›› Issue (1): 324854-324854. doi: 10.7527/S1000-6893.2020.24854

• Electronics and Electrical Engineering and Control •

Cooperative object detection in UAV-based vision-guided docking

WANG Hui1,2, JIA Zikai1,2, JIN Ren1,2, LIN Defu1,2, FAN Junfang3, XU Chao1,4   

  1. School of Aerospace Engineering, Beijing Institute of Technology, Beijing 100081, China;
    2. Beijing Key Laboratory of UAV Autonomous Control, Beijing Institute of Technology, Beijing 100081, China;
    3. Beijing Key Laboratory of High-Dynamic Navigation Technology, Beijing Information Science and Technology University, Beijing 100085, China;
    4. Beijing Institute of Special Mechanic-Electric, Beijing 100012, China
  • Received: 2020-10-12  Revised: 2020-12-10  Online: 2022-01-15  Published: 2020-12-03
  • Supported by:
    National Natural Science Foundation of China (U1613225); Funded Project of the Beijing Key Laboratory of High-Dynamic Navigation Technology, Beijing Information Science and Technology University (HDN2020105); Open Fund Project of the Beijing Key Laboratory of High-Dynamic Navigation Technology, Beijing Information Science and Technology University (HDN2020101)

Abstract: Autonomous aerial recovery of UAVs is a future development trend, and automatic detection of aerial vehicles is one of the key technologies for realizing vision-guided recovery. At present, research on the detection of aerial related objects is limited to individual objects, and the information shared between correlated objects is not fully exploited. For the problem of related-object detection in high-dynamic aerial docking, this paper proposes a single-stage fast cooperative algorithm for detecting the master aircraft and its mount, comprising a sibling independent detection head for related categories, mask-based detection enhancement for related categories, and a consistency constraint on the features of related categories. These modules jointly improve detection performance. Experiments show that on the test dataset, the algorithm achieves a 4.3% increase in average precision over YOLOv4 and a 31.6% increase over YOLOv3-Tiny. The algorithm was also applied to the high-dynamic aerial docking task of MBZIRC2020, achieving online real-time processing of airborne images, and our team won the championship.
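The abstract names a consistency constraint on the features of the two related categories (the master aircraft and its mount) but does not give its form. Below is a minimal sketch of one plausible such term, assuming a cosine-style penalty between pooled feature embeddings from the two sibling heads; the function and variable names are hypothetical, not taken from the paper.

```python
import numpy as np

def feature_consistency_loss(feat_master: np.ndarray, feat_mount: np.ndarray) -> float:
    """Hypothetical consistency term between the pooled feature vectors of
    the two related categories. It is zero when the embeddings point in the
    same direction and grows as they diverge, encouraging each detection
    head to stay consistent with its correlated object's features."""
    a = feat_master / np.linalg.norm(feat_master)
    b = feat_mount / np.linalg.norm(feat_mount)
    return 1.0 - float(a @ b)  # 1 - cosine similarity

# Usage sketch: penalize divergence between the two heads' embeddings.
f_master = np.array([0.9, 0.1, 0.4])
f_mount = np.array([0.8, 0.2, 0.5])
loss = feature_consistency_loss(f_master, f_mount)
```

In training, such a term would typically be added to the standard detection loss with a weighting coefficient, so that gradients flow between the two sibling heads.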

Key words: UAV autonomous recovery, vision guidance, object detection, related objects, convolutional neural networks