Review


A survey of air combat artificial intelligence

  • SUN Zhixiao,
  • YANG Shengqi,
  • PIAO Haiyin,
  • BAI Chengchao,
  • GE Jun
  • 1. AVIC Shenyang Aircraft Design and Research Institute, Shenyang 110035, China;
    2. School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710072, China;
    3. School of Astronautics, Harbin Institute of Technology, Harbin 150001, China

Received date: 2021-04-15

  Revised date: 2021-05-08

  Online published: 2021-06-18

Abstract

As the mode of generating equipment combat capability evolves toward the "three-way fusion" of mechanization, informationization, and intelligentization, the positioning, form, and employment of future primary air combat equipment may change fundamentally. New-era air combat missions face a series of challenges, including highly complex environments, strongly adversarial gameplay, hard real-time response requirements, incomplete information, and uncertain boundaries. Fusing artificial intelligence theory with air combat technology to develop intelligent air combat systems is therefore expected to create an asymmetric "intelligence generation gap" within the next-generation family of unmanned air superiority equipment, and to become the key to winning future air and space battlefields. This paper traces the complete development of intelligent air combat research and summarizes its theoretical foundations, represented by expert maneuvering logic, automatic rule generation, rule evolution, and machine learning. The development trends of intelligent air combat are analyzed from the system, application, and technology perspectives, and several issues in fielding intelligent air combat applications are discussed through the lenses of uncertainty, safety, interpretability, transferability, and cooperation, with the aim of outlining a new exploratory path for future intelligent air combat research and offering new ideas for the cross-domain integration of artificial intelligence theory with aviation science and technology.

Cite this article

SUN Zhixiao, YANG Shengqi, PIAO Haiyin, BAI Chengchao, GE Jun. A survey of air combat artificial intelligence[J]. Acta Aeronautica et Astronautica Sinica, 2021, 42(8): 525799. DOI: 10.7527/S1000-6893.2021.25799

Abstract

Future fighter aircraft are evolving toward the fusion of mechanization, informationization, and intelligence, and this shift may fundamentally change how they are positioned and employed. Air combat in this setting faces highly complex and strongly adversarial environments, hard real-time response requirements, and incomplete battlefield information. The crossover of Artificial Intelligence (AI) and air combat may therefore dominate the next generation of air operations. This paper comprehensively reviews the development history of modern air combat AI and summarizes its theoretical foundations, represented by combat maneuvering logic, automatic confrontation-rule generation and evolution, and machine learning. The development trends of air combat AI are analyzed from the system, application, and technique perspectives, and several issues centered on the uncertainty, safety, interpretability, transferability, and coordination of air combat AI are elaborated. This analysis outlines a new path and provides inspiration for cross-field research in AI and aviation science and technology.
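
Purely as an editorial illustration (not part of the paper), the minimal Python sketch below shows the flavor of the maneuver-library decision step that classic combat maneuvering logic builds on: each candidate maneuver from a small discrete library is rolled forward one step and scored with a hand-crafted situational-advantage function, and the highest-scoring maneuver is selected. All names (AircraftState, MANEUVER_LIBRARY, advantage, select_maneuver), the 2-D kinematics, and the scoring weights are simplified assumptions for illustration only and are not taken from any surveyed system.

import math
from dataclasses import dataclass

@dataclass
class AircraftState:           # hypothetical minimal 2-D own-ship/target state
    x: float                   # east position, m
    y: float                   # north position, m
    heading: float             # rad, 0 = east

def advantage(own: AircraftState, foe: AircraftState) -> float:
    """Toy situational score: small off-boresight angle and short range are good."""
    dx, dy = foe.x - own.x, foe.y - own.y
    rng = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx)
    off_boresight = abs((bearing - own.heading + math.pi) % (2.0 * math.pi) - math.pi)
    return -off_boresight - 1e-4 * rng

# Discrete maneuver library: heading change (rad) applied over one decision step.
MANEUVER_LIBRARY = {
    "hold": 0.0,
    "left_turn": math.radians(30.0),
    "right_turn": -math.radians(30.0),
}

def step(own: AircraftState, dpsi: float, speed: float = 250.0, dt: float = 1.0) -> AircraftState:
    """Roll the own-ship state forward one decision step under a candidate maneuver."""
    psi = own.heading + dpsi
    return AircraftState(own.x + speed * dt * math.cos(psi),
                         own.y + speed * dt * math.sin(psi),
                         psi)

def select_maneuver(own: AircraftState, foe: AircraftState) -> str:
    """One-step lookahead: pick the library maneuver with the best predicted score."""
    return max(MANEUVER_LIBRARY, key=lambda m: advantage(step(own, MANEUVER_LIBRARY[m]), foe))

if __name__ == "__main__":
    own = AircraftState(0.0, 0.0, 0.0)
    foe = AircraftState(5000.0, 3000.0, math.pi)
    print(select_maneuver(own, foe))   # "left_turn" for this geometry

In the work this survey covers, the greedy one-step choice above is replaced by automatically generated or evolved rule sets and by machine-learned policies operating over far richer state and maneuver spaces; the sketch only fixes the vocabulary of maneuver libraries and scoring functions.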
