[1] 樊会涛, 闫俊. 空战体系的演变及发展趋势[J]. 航空学报, 2022, 43(10): 527397.
FAN H T, YAN J. Evolution and development trend of air combat system[J]. Acta Aeronautica et Astronautica Sinica, 2022, 43(10): 527397 (in Chinese).
[2] 孙智孝, 杨晟琦, 朴海音, 等. 未来智能空战发展综述[J]. 航空学报, 2021, 42(8): 525799.
SUN Z X, YANG S Q, PIAO H Y, et al. A survey of air combat artificial intelligence[J]. Acta Aeronautica et Astronautica Sinica, 2021, 42(8): 525799 (in Chinese).
[3] DEMAY C R, WHITE E L, DUNHAM W D, et al. AlphaDogfight trials: Bringing autonomy to air combat[J]. Johns Hopkins APL Technical Digest, 2022, 36(2): 154-163.
[4] POPE A P, IDE J S, MIĆOVIĆ D, et al. Hierarchical reinforcement learning for air combat at DARPA’s AlphaDogfight trials[J]. IEEE Transactions on Artificial Intelligence, 2023, 4(6): 1371-1385.
[5] 周攀, 黄江涛, 章胜, 等. 基于深度强化学习的智能空战决策与仿真[J]. 航空学报, 2023, 44(4): 126731.
ZHOU P, HUANG J T, ZHANG S, et al. Intelligent air combat decision making and simulation based on deep reinforcement learning[J]. Acta Aeronautica et Astronautica Sinica, 2023, 44(4): 126731 (in Chinese).
[6] 周攀, 李霓, 黄江涛, 等. 非完备信息下无人机近距博弈自主决策[J]. 航空学报, 2025, 46(S1): 732215.
ZHOU P, LI N, HUANG J T, et al. Autonomous decision-making in close-range game under imperfect information for unmanned aerial vehicles[J]. Acta Aeronautica et Astronautica Sinica, 2025, 46(S1): 732215 (in Chinese).
[7] WANG D H, ZHANG J D, YANG Q M, et al. An autonomous attack decision-making method based on hierarchical virtual Bayesian reinforcement learning[J]. IEEE Transactions on Aerospace and Electronic Systems, 2024, 60(5): 7075-7088.
[8] DE MARCO A, D’ONZA P M, MANFREDI S. A deep reinforcement learning control approach for high-performance aircraft[J]. Nonlinear Dynamics, 2023, 111(18): 17037-17077.
[9] SALDIRAN E, HASANZADE M, INALHAN G, et al. Explainability of AI-driven air combat agent[C]∥2023 IEEE Conference on Artificial Intelligence (CAI). Piscataway: IEEE Press, 2023: 85-86.
[10] 杨书恒, 张栋, 熊威, 等. 基于可解释性强化学习的空战机动决策方法[J]. 航空学报, 2024, 45(18): 329922.
YANG S H, ZHANG D, XIONG W, et al. Decision-making method for air combat maneuver based on explainable reinforcement learning[J]. Acta Aeronautica et Astronautica Sinica, 2024, 45(18): 329922 (in Chinese).
[11] SELMONAJ A, SZEHR O, DEL RIO G, et al. Hierarchical multi-agent reinforcement learning for air combat maneuvering[C]∥2023 International Conference on Machine Learning and Applications (ICMLA). Piscataway: IEEE Press, 2023: 1031-1038.
[12] 李文韬, 方峰, 王振亚, 等. 引入混合超网络改进MADDPG的双机编队空战自主机动决策[J]. 航空学报, 2024, 45(17): 529460.
LI W T, FANG F, WANG Z Y, et al. Intelligent maneuvering decision-making in two-UCAV cooperative air combat based on improved MADDPG with hybrid hyper network[J]. Acta Aeronautica et Astronautica Sinica, 2024, 45(17): 529460 (in Chinese).
[13] XU X J, WANG Y F, GUO X, et al. Multi-UAV air combat cooperative game based on virtual opponent and value attention decomposition policy gradient[J]. Expert Systems with Applications, 2025, 267: 126069.
[14] ZHOU Y M, YANG F, ZHANG C Y, et al. Cooperative decision-making algorithm with efficient convergence for UCAV formation in beyond-visual-range air combat based on multi-agent reinforcement learning[J]. Chinese Journal of Aeronautics, 2024, 37(8): 311-328.
[15] YAN Z H, LIANG X L, HOU Y Q, et al. A sample selection mechanism for multi-UCAV air combat policy training using multi-agent reinforcement learning[J]. Chinese Journal of Aeronautics, 2025, 38(6): 103391.
[16] JIANG F L, XU M Q, LI Y Q, et al. Short-range air combat maneuver decision of UAV swarm based on multi-agent Transformer introducing virtual objects[J]. Engineering Applications of Artificial Intelligence, 2023, 123: 106358.
[17] WU J H, ZHANG N, LI D Y, et al. A context-aware feature fusion method for multi-UAV cooperative air combat[J]. IEEE Transactions on Intelligent Transportation Systems, 2025, 26(5): 7197-7210.
[18] BERNDT J. JSBSim: An open source flight dynamics model in C++[C]∥AIAA Modeling and Simulation Technologies Conference and Exhibit. Reston: AIAA, 2004.
[19] YU C, VELU A, VINITSKY E, et al. The surprising effectiveness of PPO in cooperative multi-agent games[J]. Advances in Neural Information Processing Systems, 2022, 35: 24611-24624.
[20] 李霓, 廉云霄, 周攀, 等. 面向智能空战的深度强化学习技术综述[J]. 航空工程进展, 2025, 16(3): 1-16.
LI N, LIAN Y X, ZHOU P, et al. A survey of deep reinforcement learning technologies for intelligent air combat[J]. Advances in Aeronautical Science and Engineering, 2025, 16(3): 1-16 (in Chinese).
[21] VELIČKOVIĆ P, CUCURULL G, CASANOVA A, et al. Graph attention networks[DB/OL]. arXiv preprint: 1710.10903, 2017.
[22] DEY R, SALEM F M. Gate-variants of gated recurrent unit (GRU) neural networks[C]∥2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS). Piscataway: IEEE Press, 2017: 1597-1600.
[23] ZHANG R Z, XU Z L, MA C D, et al. A survey on self-play methods in reinforcement learning[DB/OL]. arXiv preprint: 2408.01072, 2024.
[24] PANG J H, HE J L, MOHAMED N, et al. A hierarchical reinforcement learning framework for multi-UAV combat using leader-follower strategy[DB/OL]. arXiv preprint: 2501.13132, 2025.