Reviews

Some achievements on detection methods of UAV autonomous landing markers

  • ZHAO Liangyu,
  • LI Dan,
  • ZHAO Chenyue,
  • JIANG Fei
  • 1. School of Aerospace Engineering, Beijing Institute of Technology, Beijing 100081, China
  • 2. Institute of People's Armed Police, Beijing 100012, China

Received date: 2021-05-31

Revised date: 2021-06-21

Online published: 2021-07-09

Supported by

National Natural Science Foundation of China (12072027, 11532002)

Abstract

To further promote research and development on sea/land-air cross-domain collaborative technology in China, the main research achievements and latest progress in autonomous landing marker detection methods for Unmanned Aerial Vehicles (UAVs) are reviewed. First, after an introduction to vision-guided UAV autonomous landing, marker detection methods based on image segmentation, classifiers, and deep learning are discussed. Second, research teams at home and abroad, together with their achievements in autonomous UAV landing on static and moving platforms such as vehicles and ships, are introduced, and the landing markers and detection methods they employ are summarized. Finally, key technical issues and feasible solutions for further investigation are discussed in terms of marker detection on moving platforms and in complex environments, system software algorithms, hardware equipment, and multi-sensor fusion. How to overcome the dependence on artificial markers and apply deep learning to the detection of safe landing areas in non-cooperative environments is also discussed.
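
As a purely illustrative sketch (not the authors' method), the snippet below shows one of the classical approaches the review surveys: locating a cooperative fiducial landing marker in a camera frame with OpenCV's ArUco module and converting it into a pixel offset that a landing controller could track. It assumes an opencv-contrib build that still exposes the functional aruco API (roughly versions 4.0-4.6); the function name find_marker_offset is hypothetical.

    # Illustrative sketch only: fiducial landing-marker detection with OpenCV ArUco.
    # Assumes opencv-contrib-python with the legacy functional aruco API.
    import cv2

    ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    def find_marker_offset(frame):
        """Return the (dx, dy) pixel offset of the first detected marker
        from the image centre, or None if no marker is visible."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
        if ids is None or len(corners) == 0:
            return None
        # Marker centre = mean of its four corner points.
        cx, cy = corners[0][0].mean(axis=0)
        h, w = gray.shape
        return cx - w / 2.0, cy - h / 2.0

    if __name__ == "__main__":
        cap = cv2.VideoCapture(0)  # downward-facing onboard camera (hypothetical source)
        ok, frame = cap.read()
        if ok:
            print("marker offset (px):", find_marker_offset(frame))
        cap.release()

Deep-learning-based detectors covered in the review replace this hand-crafted detection step with a trained network, but ultimately produce the same kind of guidance signal for the landing controller.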

Cite this article

ZHAO Liangyu, LI Dan, ZHAO Chenyue, JIANG Fei. Some achievements on detection methods of UAV autonomous landing markers[J]. ACTA AERONAUTICA ET ASTRONAUTICA SINICA, 2022, 43(9): 25882-025882. DOI: 10.7527/S1000-6893.2021.25882
