[1] United States Air Force. United States Air Force unmanned aircraft systems flight plan 2009-2047[S]. Washington, D.C.: United States Air Force, 2009.
[2] WARGO C A, CHURCH G C, GLANEUESKI J, et al. Unmanned Aircraft Systems (UAS) research and future analysis[C]//2014 IEEE Aerospace Conference. Piscataway: IEEE Press, 2014: 1-16.
[3] Radio Technical Commission for Aeronautics (RTCA). Operational and functional requirements and safety objectives (OFRSO) for unmanned aircraft systems (UAS) standards: DO-344[S]. Washington, D.C.: RTCA, 2013.
[4] 蔡志浩, 杨丽曼, 王英勋, 等. 无人机全空域飞行影响因素分析[J]. 北京航空航天大学学报, 2011, 37(2): 175-179, 184. CAI Z H, YANG L M, WANG Y X, et al. Analysis for whole airspace flight key factors of unmanned aerial vehicles[J]. Journal of Beijing University of Aeronautics and Astronautics, 2011, 37(2): 175-179, 184 (in Chinese).
[5] 王英勋, 蔡志浩. 无人机的自主飞行控制[J]. 航空制造技术, 2009, 52(8): 26-31. WANG Y X, CAI Z H. Autonomous flight control of unmanned aerial vehicle[J]. Aeronautical Manufacturing Technology, 2009, 52(8): 26-31 (in Chinese).
[6] PRATS X, DELGADO L, RAMÍREZ J, et al. Requirements, issues, and challenges for sense and avoid in unmanned aircraft systems[J]. Journal of Aircraft, 2012, 49(3): 677-687.
[7] YU X, ZHANG Y M. Sense and avoid technologies with applications to unmanned aircraft systems: Review and prospects[J]. Progress in Aerospace Sciences, 2015, 74: 152-166.
[8] MCFADYEN A, MEJIAS L. A survey of autonomous vision-based see and avoid for unmanned aircraft systems[J]. Progress in Aerospace Sciences, 2016, 80: 1-17.
[9] WANG J, LIU Y, SONG H. Counter-unmanned aircraft system(s) (C-UAS): State of the art, challenges, and future trends[J]. IEEE Aerospace and Electronic Systems Magazine, 2021, 36(3): 4-29.
[10] 吕洋, 康童娜, 潘泉, 等. 无人机感知与规避: 概念、技术与系统[J]. 中国科学: 信息科学, 2019, 49(5): 520-537. LYU Y, KANG T N, PAN Q, et al. UAV sense and avoidance: Concepts, technologies, and systems[J]. Scientia Sinica (Informationis), 2019, 49(5): 520-537 (in Chinese).
[11] 曹云峰, 张洲宇, 钟佩仪, 等. 入侵目标视觉检测与识别的研究进展[J]. 计算机测量与控制, 2019, 27(8): 7-11. CAO Y F, ZHANG Z Y, ZHONG P Y, et al. Review on vision based intruder detection and recognition[J]. Computer Measurement & Control, 2019, 27(8): 7-11 (in Chinese).
[12] 张进, 胡明华, 张晨. 空中交通管理中的复杂性研究[J]. 航空学报, 2009, 30(11): 2132-2142. ZHANG J, HU M H, ZHANG C. Complexity research in air traffic management[J]. Acta Aeronautica et Astronautica Sinica, 2009, 30(11): 2132-2142 (in Chinese).
[13] 宫淑丽. 民航飞机电子系统[M]. 北京: 科学出版社, 2015. GONG S L. Electronic systems of civil aviation aircraft[M]. Beijing: Science Press, 2015 (in Chinese).
[14] ANGELOV P. Sense and avoid in UAS: Research and applications[M]. Chichester: Wiley, 2012.
[15] 中国民航网. 莫桑比克航空客机与无人机发生相撞鼻锥受损[EB/OL]. (2017-01-07)[2021-04-05]. http://www.caacnews.com.cn/1/88/201701/t20170107_1208006_wap.html. China Civil Aviation Network. LAM Mozambique Airlines passenger plane collides with UAV and nose cone is damaged[EB/OL]. (2017-01-07)[2021-04-05]. http://www.caacnews.com.cn/1/88/201701/t20170107_1208006_wap.html (in Chinese).
[16] Federal Aviation Administration. Integration of civil unmanned aircraft systems (UAS) in the national airspace system (NAS) roadmap (first edition)[R]. Washington, D.C.: FAA, 2013.
[17] European RPAS Steering Group. Roadmap for the integration of civil remotely-piloted aircraft systems into the European Aviation System[R]. 2013.
[18] Federal Aviation Administration. Integration of civil unmanned aircraft systems (UAS) in the national airspace system (NAS) roadmap (second edition)[R]. Washington, D.C.: FAA, 2018.
[19] MCFADYEN A, CLOTHIER R, CAMPBELL D, et al. Scoping study for remotely piloted aircraft systems integration into civil airspace[R]. 2014.
[20] BILLINGSLEY T B, KOCHENDERFER M J, CHRYSSANTHACOPOULOS J P. Collision avoidance for general aviation[J]. IEEE Aerospace and Electronic Systems Magazine, 2012, 27(7): 4-12.
[21] VALOVAGE E. Enhanced ADS-B research[J]. IEEE Aerospace and Electronic Systems Magazine, 2007, 22(5): 35-38.
[22] PATIAS P. Introduction to unmanned aircraft systems[J]. Photogrammetric Engineering & Remote Sensing, 2016, 82(2): 89-92.
[23] ROSEN P A, HENSLEY S, WHEELER K, et al. UAVSAR: New NASA airborne SAR system for research[J]. IEEE Aerospace and Electronic Systems Magazine, 2007, 22(11): 21-28.
[24] KARHOFF B C, LIMB J I, ORAVSKY S W, et al. Eyes in the domestic sky: An assessment of sense and avoid technology for the Army's "Warrior" unmanned aerial vehicle[C]//2006 IEEE Systems and Information Engineering Design Symposium. Piscataway: IEEE Press, 2006: 36-42.
[25] OSBORNE R W III, BAR-SHALOM Y, WILLETT P, et al. Design of an adaptive passive collision warning system for UAVs[C]//Proceedings of SPIE 7445, Signal and Data Processing of Small Targets 2009. Bellingham: SPIE, 2009: 333-345.
[26] MEJIAS L, MCFADYEN A, FORD J J. Sense and avoid technology developments at Queensland University of Technology[J]. IEEE Aerospace and Electronic Systems Magazine, 2016, 31(7): 28-37.
[27] LAI J, MEJIAS L, FORD J J. Airborne vision-based collision-detection system[J]. Journal of Field Robotics, 2011, 28(2): 137-157.
[28] LAI J, FORD J J, MEJIAS L, et al. Characterization of sky-region morphological-temporal airborne collision detection[J]. Journal of Field Robotics, 2013, 30(2): 171-193.
[29] MOLLOY T L, FORD J J, MEJIAS L. Detection of aircraft below the horizon for vision-based detect and avoid in unmanned aircraft systems[J]. Journal of Field Robotics, 2017, 34(7): 1378-1391.
[30] LAI J, FORD J J, O'SHEA P, et al. Vision-based estimation of airborne target pseudobearing rate using hidden Markov model filters[J]. IEEE Transactions on Aerospace and Electronic Systems, 2013, 49(4): 2129-2145.
[31] ZHANG Z Y, CAO Y F, DING M, et al. An intruder detection algorithm for vision based sense and avoid system[C]//2016 International Conference on Unmanned Aircraft Systems (ICUAS). Piscataway: IEEE Press, 2016: 550-556.
[32] ZHANG Z Y, CAO Y F, DING M, et al. Candidate regions extraction of intruder airplane under complex background for vision-based sense and avoid system[J]. IET Science, Measurement & Technology, 2017, 11(5): 571-580.
[33] CHEN R, GEVORKIAN A, FUNG A, et al. Multi-sensor data integration for autonomous sense and avoid[C]//Infotech@Aerospace 2011. Reston: AIAA, 2011: 1479.
[34] ZHANG Z Y, CAO Y F, ZHONG P Y, et al. An edge-boxes-based intruder detection algorithm for UAV sense and avoid system[J]. Transactions of Nanjing University of Aeronautics and Astronautics, 2019, 36(2): 253-263.
[35] LYU Y, PAN Q, ZHAO C H, et al. Autonomous stereo vision based collision avoid system for small UAV[C]//AIAA Information Systems-AIAA Infotech@Aerospace. Reston: AIAA, 2017: 1150.
[36] LYU Y, PAN Q, ZHAO C H, et al. A UAV sense and avoid system design method based on software simulation[C]//2016 International Conference on Unmanned Aircraft Systems (ICUAS). Piscataway: IEEE Press, 2016: 572-579.
[37] LYU Y, PAN Q, ZHAO C H, et al. A vision based sense and avoid system for small unmanned helicopter[C]//2015 International Conference on Unmanned Aircraft Systems (ICUAS). Piscataway: IEEE Press, 2015: 586-592.
[38] CHO S, HUH S, SHIM D H, et al. Vision-based detection and tracking of airborne obstacles in a cluttered environment[J]. Journal of Intelligent & Robotic Systems, 2013, 69(1-4): 475-488.
[39] ZSEDROVITS T, BAUER P, HIBA A, et al. Performance analysis of camera rotation estimation algorithms in multi-sensor fusion for unmanned aircraft attitude estimation[J]. Journal of Intelligent & Robotic Systems, 2016, 84(1-4): 759-777.
[40] DING M, WEI L. Sky-ground region segmentation and horizon detection using multi-scale dark channel images[J]. ICIC Express Letters, Part B: Applications, 2016, 7(2): 369-374.
[41] FASANO G, ACCARDO D, MOCCIA A, et al. Sense and avoid for unmanned aircraft systems[J]. IEEE Aerospace and Electronic Systems Magazine, 2016, 31(11): 82-110.
[42] FASANO G, ACCARDO D, MOCCIA A, et al. Multi-sensor-based fully autonomous non-cooperative collision avoidance system for unmanned air vehicles[J]. Journal of Aerospace Computing, Information, and Communication, 2008, 5(10): 338-360.
[43] FASANO G, ACCARDO D, TIRRI A E, et al. Radar/electro-optical data fusion for non-cooperative UAS sense and avoid[J]. Aerospace Science and Technology, 2015, 46: 436-450.
[44] HOUGH P V C. Method and means for recognizing complex patterns: U.S. Patent 3,069,654[P]. 1962-12-18.
[45] DUDA R O, HART P E. Use of the Hough transformation to detect lines and curves in pictures[J]. Communications of the ACM, 1972, 15(1): 11-15.
[46] LIN Y C, PINTEA S L, VAN GEMERT J C. Deep Hough-transform line priors[M]//Computer Vision-ECCV 2020. Cham: Springer International Publishing, 2020: 323-340.
[47] LU X H, YAO J, LI K, et al. CannyLines: A parameter-free line segment detector[C]//2015 IEEE International Conference on Image Processing. Piscataway: IEEE Press, 2015: 507-511.
[48] SANTOS T, MOREIRA M, ALMEIDA J, et al. PLineD: Vision-based power lines detection for unmanned aerial vehicles[C]//2017 IEEE International Conference on Autonomous Robot Systems and Competitions. Piscataway: IEEE Press, 2017: 253-259.
[49] VON GIOI R G, JAKUBOWICZ J, MOREL J M, et al. LSD: A fast line segment detector with a false detection control[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(4): 722-732.
[50] OSSIMITZ C, TAHERINEJAD N. A fast line segment detector using approximate computing[C]//2021 IEEE International Symposium on Circuits and Systems. Piscataway: IEEE Press, 2021: 1-5.
[51] JIANG X Y, MA J Y, XIAO G B, et al. A review of multimodal image matching: Methods and applications[J]. Information Fusion, 2021, 73: 22-71.
[52] ATARITA F. Hyperspectral imaging simulator and applications for unmanned aerial vehicles[D]. Kingston: Queen’s University, 2021.
[53] LOWE D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
[54] JHAN J P, RAU J Y. A generalized tool for accurate and efficient image registration of UAV multi-lens multispectral cameras by N-SURF matching[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 6353-6362.
[55] CUCKA P, ROSENFELD A. Linear feature compatibility for pattern-matching relaxation[J]. Pattern Recognition, 1992, 25(2): 189-196.
[56] BENTOUTOU Y, TALEB N, KPALMA K, et al. An automatic image registration for applications in remote sensing[J]. IEEE Transactions on Geoscience and Remote Sensing, 2005, 43(9): 2127-2137.
[57] HAN X F, LEUNG T, JIA Y Q, et al. MatchNet: Unifying feature and metric learning for patch-based matching[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2015: 3279-3286.
[58] ZAGORUYKO S, KOMODAKIS N. Deep compare: A study on using convolutional neural networks to compare image patches[J]. Computer Vision and Image Understanding, 2017, 164: 38-55.
[59] MA J Y, JIANG X Y, FAN A X, et al. Image matching from handcrafted to deep features: A survey[J]. International Journal of Computer Vision, 2021, 129(1): 23-79.
[60] DING M, WEI L, WANG B F. Research on fusion method for infrared and visible images via compressive sensing[J]. Infrared Physics & Technology, 2013, 57: 56-67.
[61] PAJARES G, MANUEL DE LA CRUZ J. A wavelet-based image fusion tutorial[J]. Pattern Recognition, 2004, 37(9): 1855-1872.
[62] DU J, LI W S, XIAO B, et al. Union Laplacian pyramid with multiple features for medical image fusion[J]. Neurocomputing, 2016, 194: 326-339.
[63] YADAV S P, YADAV S. Image fusion using hybrid methods in multimodality medical images[J]. Medical & Biological Engineering & Computing, 2020, 58(4): 669-687.
[64] ZHANG Q, LIU Y, BLUM R S, et al. Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review[J]. Information Fusion, 2018, 40: 57-75.
[65] JIN X, JIANG Q, YAO S W, et al. A survey of infrared and visual image fusion methods[J]. Infrared Physics & Technology, 2017, 85: 478-501.
[66] ZHU Z Q, YIN H P, CHAI Y, et al. A novel multi-modality image fusion method based on image decomposition and sparse representation[J]. Information Sciences, 2018, 432: 516-529.
[67] LIU C H, QI Y, DING W R. Infrared and visible image fusion method based on saliency detection in sparse domain[J]. Infrared Physics & Technology, 2017, 83: 94-102.
[68] LIU Y, CHEN X, WANG Z F, et al. Deep learning for pixel-level image fusion: Recent advances and future prospects[J]. Information Fusion, 2018, 42: 158-173.
[69] MA J Y, LIANG P W, YU W, et al. Infrared and visible image fusion via detail preserving adversarial learning[J]. Information Fusion, 2020, 54: 85-98.
[70] MA J Y, YU W, LIANG P W, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26.
[71] LI H, WU X J. DenseFuse: A fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623.
[72] LI H, WU X J, KITTLER J. Infrared and visible image fusion using a deep learning framework[C]//2018 24th International Conference on Pattern Recognition (ICPR). Piscataway: IEEE Press, 2018: 2705-2710.
[73] YE Q, LI L G, TAN L, et al. Image fusion based on convolution sparse representation and pulse coupled neural network in non-subsampled contourlet domain[J]. International Journal of Embedded Systems, 2020, 12(1): 102-104.
[74] TAO J, CAO Y F, DING M, et al. Visible and infrared image fusion for space debris recognition with convolutional sparse representation[C]//2018 IEEE CSAA Guidance, Navigation and Control Conference. Piscataway: IEEE Press, 2018: 1-5.
[75] SIVARAMAN S, TRIVEDI M M. Looking at vehicles on the road: A survey of vision-based vehicle detection, tracking, and behavior analysis[J]. IEEE Transactions on Intelligent Transportation Systems, 2013, 14(4): 1773-1795.
[76] MUKHTAR A, XIA L K, TANG T B. Vehicle detection techniques for collision avoidance systems: A review[J]. IEEE Transactions on Intelligent Transportation Systems, 2015, 16(5): 2318-2338.
[77] KARASEV V, AYVACI A, HEISELE B, et al. Intent-aware long-term prediction of pedestrian motion[C]//2016 IEEE International Conference on Robotics and Automation. Piscataway: IEEE Press, 2016: 2543-2549.
[78] KELLER C G, GAVRILA D M. Will the pedestrian cross? A study on pedestrian path prediction[J]. IEEE Transactions on Intelligent Transportation Systems, 2014, 15(2): 494-506.
[79] DALAL N, TRIGGS B. Histograms of oriented gradients for human detection[C]//2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2005: 886-893.
[80] CLARK M, KERN Z, PRAZENICA R J. A vision-based proportional navigation guidance law for UAS sense and avoid[C]//AIAA Guidance, Navigation, and Control Conference. Reston: AIAA, 2015: 0074.
[81] VANEK B, PENI T, BOKOR J, et al. Performance analysis of a vision only sense and avoid system for small UAVs[C]//AIAA Guidance, Navigation, and Control Conference. Reston: AIAA, 2011: 6602.
[82] ROZANTSEV A, LEPETIT V, FUA P. Detecting flying objects using a single moving camera[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(5): 879-892.
[83] OKSUZ K, CAM B C, KALKAN S, et al. Imbalance problems in object detection: A review[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(10): 3388-3415.
[84] PADILLA R, NETTO S L, DA SILVA E A B. A survey on performance metrics for object-detection algorithms[C]//2020 International Conference on Systems, Signals and Image Processing (IWSSIP). Piscataway: IEEE Press, 2020: 237-242.
[85] ZITNICK C L, DOLLÁR P. Edge boxes: Locating object proposals from edges[C]//Computer Vision-ECCV 2014. Cham: Springer International Publishing, 2014: 391-405.
[86] UIJLINGS J R R, VAN DE SANDE K E A, GEVERS T, et al. Selective search for object recognition[J]. International Journal of Computer Vision, 2013, 104(2): 154-171.
[87] AHONEN T, HADID A, PIETIKÄINEN M. Face description with local binary patterns: Application to face recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(12): 2037-2041.
[88] ANBARASU B, ANITHA G. Indoor scene recognition for micro aerial vehicles navigation using enhanced SIFT-ScSPM descriptors[J]. Journal of Navigation, 2020, 73(1): 37-55.
[89] CAO X B, WU C X, YAN P K, et al. Linear SVM classification using boosting HOG features for vehicle detection in low-altitude airborne videos[C]//2011 18th IEEE International Conference on Image Processing. Piscataway: IEEE Press, 2011: 2421-2424.
[90] DHILLON A, VERMA G K. Convolutional neural network: A review of models, methodologies and applications to object detection[J]. Progress in Artificial Intelligence, 2020, 9(2): 85-112.
[91] TONG K, WU Y Q, ZHOU F. Recent advances in small object detection based on deep learning: A review[J]. Image and Vision Computing, 2020, 97: 103910.
[92] SHARMA V, MIR R N. A comprehensive and systematic look up into deep learning based object detection techniques: A review[J]. Computer Science Review, 2020, 38: 100301.
[93] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: Unified, real-time object detection[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2016: 779-788.
[94] REDMON J, FARHADI A. YOLO9000: Better, faster, stronger[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2017: 6517-6525.
[95] REDMON J, FARHADI A. YOLOv3: An incremental improvement[DB/OL]. arXiv preprint: 1804.02767, 2018.
[96] BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: Optimal speed and accuracy of object detection[DB/OL]. arXiv preprint: 2004.10934, 2020.
[97] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]//2014 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2014: 580-587.
[98] GIRSHICK R. Fast R-CNN[C]//2015 IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2015: 1440-1448.
[99] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[100] GARCIA-GARCIA B, BOUWMANS T, ROSALES SILVA A J. Background subtraction in real applications: Challenges, current models and future directions[J]. Computer Science Review, 2020, 35: 100204.
[101] AGRAWAL S, NATU P. Segmentation of moving objects using numerous background subtraction methods for surveillance applications[J]. International Journal of Innovative Technology and Exploring Engineering, 2020, 9(3): 2553-2563.
[102] ANGADI S, NANDYAL S. A review on object detection and tracking in video surveillance[J]. International Journal of Advanced Research in Engineering and Technology, 2020, 11(9):1033-1042.
[103] TOM A J, GEORGE S N. Simultaneous reconstruction and moving object detection from compressive sampled surveillance videos[J]. IEEE Transactions on Image Processing, 2020, 29: 7590-7602.
[104] WANG Y L, WEI H C, DING X Y, et al. Video background/foreground separation model based on non-convex rank approximation RPCA and superpixel motion detection[J]. IEEE Access, 2020, 8: 157493-157503.
[105] ZHANG Z Y, CAO Y F, DING M, et al. Spatial and temporal context information fusion based flying objects detection for autonomous sense and avoid[C]//2018 International Conference on Unmanned Aircraft Systems (ICUAS). Piscataway: IEEE Press, 2018: 569-578.
[106] LIU P P, LYU M, KING I, et al. SelFlow: Self-supervised learning of optical flow[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2019: 4566-4575.
[107] DE CROON G C H E, DE WAGTER C, SEIDL T. Enhancing optical-flow-based control by learning visual appearance cues for flying robots[J]. Nature Machine Intelligence, 2021, 3(1): 33-41.
[108] LIAO B, HU J L, GILMORE R O. Optical flow estimation combining with illumination adjustment and edge refinement in livestock UAV videos[J]. Computers and Electronics in Agriculture, 2021, 180: 105910.
[109] ZHANG Z Y, CAO Y F, DING M, et al. Monocular vision based obstacle avoidance trajectory planning for unmanned aerial vehicle[J]. Aerospace Science and Technology, 2020, 106: 106199.
[110] KNYAZ V A, KNIAZ V V, REMONDINO F, et al. 3D reconstruction of a complex grid structure combining UAS images and deep learning[J]. Remote Sensing, 2020, 12(19): 3128.
[111] ZHENG T X, HUANG S, LI Y F, et al. Key techniques for vision based 3D reconstruction: A review[J]. Acta Automatica Sinica, 2020, 46(4): 631-652.
[112] INGALE A K, DIVYA U J. Real-time 3D reconstruction techniques applied in dynamic scenes: A systematic literature review[J]. Computer Science Review, 2021, 39: 100338.
[113] WU F P, ZHU S K, YE W L. A single image 3D reconstruction method based on a novel monocular vision system[J]. Sensors, 2020, 20(24): 7045.
[114] FU K, PENG J S, HE Q W, et al. Single image 3D object reconstruction based on deep learning: A review[J]. Multimedia Tools and Applications, 2021, 80(1): 463-498.
[115] SAPUTRA M R U, MARKHAM A, TRIGONI N. Visual SLAM and structure from motion in dynamic environments[J]. ACM Computing Surveys, 2019, 51(2): 1-36.
[116] PASQUALETTO CASSINIS L, FONOD R, GILL E. Review of the robustness and applicability of monocular pose estimation systems for relative navigation with an uncooperative spacecraft[J]. Progress in Aerospace Sciences, 2019, 110: 100548.
[117] LU X X. A review of solutions for perspective-n-point problem in camera pose estimation[J]. Journal of Physics: Conference Series, 2018, 1087: 052009.
[118] KIM P, LEE H, KIM H J. Autonomous flight with robust visual odometry under dynamic lighting conditions[J]. Autonomous Robots, 2019, 43(6): 1605-1622.
[119] KUO X Y, LIU C E, LIN K C, et al. Dynamic attention-based visual odometry[C]//2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Piscataway: IEEE Press, 2021: 5753-5760.
[120] PARAMESHWARA C M, SANKET N J, SINGH C D, et al. 0-MMS: Zero-shot multi-motion segmentation with a monocular event camera[C]//2021 IEEE International Conference on Robotics and Automation. Piscataway: IEEE Press, 2021: 9594-9600.
[121] LAGA H, JOSPIN L V, BOUSSAID F, et al. A survey on deep learning techniques for stereo-based depth estimation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(4): 1738-1764.
[122] MARR D, POGGIO T. A computational theory of human stereo vision[J]. Proceedings of the Royal Society of London. Series B, Biological Sciences, 1979, 204(1156): 301-328.
[123] ZOU X J, ZOU H X, LU J. Virtual manipulator-based binocular stereo vision positioning system and errors modelling[J]. Machine Vision and Applications, 2012, 23(1): 43-63.
[124] 李占贤, 许哲. 双目视觉的成像模型分析[J]. 机械工程与自动化, 2014(4): 191-192. LI Z X, XU Z. Analysis of imaging model of binocular vision[J]. Mechanical Engineering & Automation, 2014(4): 191-192 (in Chinese).
[125] WANG Q, MENG Z J, LIU H. Review on application of binocular vision technology in field obstacle detection[J]. IOP Conference Series: Materials Science and Engineering, 2020, 806(1): 012025.
[126] FAN X J, GUO Y J, LIU H, et al. Improved artificial potential field method applied for AUV path planning[J]. Mathematical Problems in Engineering, 2020, 2020: 6523158.
[127] PARK S O, LEE M C, KIM J. Trajectory planning with collision avoidance for redundant robots using Jacobian and artificial potential field-based real-time inverse kinematics[J]. International Journal of Control, Automation and Systems, 2020, 18(8): 2095-2107.
[128] WANG D Y, WANG P, ZHANG X T, et al. An obstacle avoidance strategy for the wave glider based on the improved artificial potential field and collision prediction model[J]. Ocean Engineering, 2020, 206: 107356.
[129] LAYMAN T, FIELDS T, YAKIMENKO O A. Evaluation of proportional navigation for multirotor pursuit[C]//AIAA Scitech 2021 Forum. Reston: AIAA, 2021: 1813.
[130] BAUER P, HIBA A, BOKOR J, et al. Three dimensional intruder closest point of approach estimation based-on monocular image parameters in aircraft sense and avoid[J]. Journal of Intelligent & Robotic Systems, 2019, 93(1-2): 261-276.
[131] BAUER P, HIBA A, BOKOR J. Monocular image-based intruder direction estimation at closest point of approach[C]//2017 International Conference on Unmanned Aircraft Systems (ICUAS). Piscataway: IEEE Press, 2017: 1108-1117.
[132] TAN C Y, HUANG S N, TAN K K, et al. Collision avoidance design on unmanned aerial vehicle in 3D space[J]. Unmanned Systems, 2018, 6(4): 277-295.
[133] LEVINE S. Reinforcement learning and control as probabilistic inference: Tutorial and review[DB/OL]. arXiv preprint: 1805.00909, 2018.
[134] KIRAN B R, SOBH I, TALPAERT V, et al. Deep reinforcement learning for autonomous driving: A survey[J]. IEEE Transactions on Intelligent Transportation Systems, 2021: 1-18.
[135] VAN DEN BERG J, LIN M, MANOCHA D. Reciprocal Velocity Obstacles for real-time multi-agent navigation[C]//2008 IEEE International Conference on Robotics and Automation. Piscataway: IEEE Press, 2008: 1928-1935.
[136] CHEN Y F, LIU M, EVERETT M, et al. Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning[C]//2017 IEEE International Conference on Robotics and Automation. Piscataway: IEEE Press, 2017: 285-292.