Learning method for autonomous air combat based on experience transfer

  • ZHOU Kai,
  • WEI Ruixuan,
  • ZHANG Qirui,
  • DING Chao
  • 1. Graduate College, Air Force Engineering University, Xi'an 710051, China;
    2. Aeronautics Engineering College, Air Force Engineering University, Xi'an 710038, China;
    3. Unit 95561 of PLA, Rikaze City 857000, China

Received date: 2020-05-15

Revised date: 2020-05-30

Online published: 2020-06-18

Supported by

Science and Technology Innovation 2030-Key Project of "New Generation Artificial Intelligence" (2018AAA0102403); National Natural Science Foundation of China (61573373)

Abstract

Most existing machine learning methods operate in an interactive learning mode, in which training relies heavily on data gathered through interaction with the environment. Air combat is a training mission with sparse rewards: in the early stage of learning, the system usually explores for a long time before finding actions that yield a reward, and retraining from scratch for every new mission wastes computing resources. Therefore, this paper designs a learning method based on experience transfer, which enables a trained agent to share knowledge with a new agent and thereby improves the new agent's learning efficiency on the new task. First, a learning model based on experience transfer is constructed, inspired by the way humans learn rapidly from experience. Second, considering both knowledge sharing and the characteristics of the new task, the connotation of experience is defined, and a cognitive mode of "knowledge + task → experience" is established. Third, a reference learning method is designed that combines external experience with the new task and transforms it into the new agent's own knowledge. Finally, using experience applicability as the screening index, the influence of experience applicability on reference learning efficiency is analyzed, and the screening boundary for applying reference learning is determined. Through reference learning, the new agent obtains preliminary knowledge of the new mission and finds action policies that earn rewards, thereby accelerating learning on the new mission.
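The workflow the abstract describes (transfer a trained agent's knowledge to a new agent, gated by an experience-applicability score) can be sketched roughly as below. This is a minimal illustrative sketch for tabular Q-learning, not the authors' implementation: the agent class, the `applicability` metric, and the threshold and blending weight are all assumptions introduced for illustration.

```python
import numpy as np

# Hypothetical sketch of experience transfer between agents.
# The agent class, applicability metric, threshold, and blend weight
# are illustrative assumptions, not the paper's actual method.

class QTableAgent:
    def __init__(self, n_states, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        # Small random initialization stands in for "no prior knowledge".
        self.q = rng.normal(scale=0.01, size=(n_states, n_actions))

    def act(self, state):
        # Greedy action under the current Q-table.
        return int(np.argmax(self.q[state]))

def applicability(source_agent, probe_transitions):
    """Fraction of probe transitions from the NEW task on which the
    source agent's greedy action coincides with an action that earned
    a positive reward; a crude stand-in for the screening index."""
    hits = sum(r > 0 for (s, a, r) in probe_transitions
               if source_agent.act(s) == a)
    return hits / max(len(probe_transitions), 1)

def reference_learning(new_agent, source_agent, probe_transitions,
                       threshold=0.5, blend=0.8):
    """Blend the source agent's Q-values into the new agent only when
    the experience is applicable enough (the screening boundary)."""
    score = applicability(source_agent, probe_transitions)
    if score >= threshold:
        new_agent.q = blend * source_agent.q + (1 - blend) * new_agent.q
    return score
```

Under this sketch, a new agent starts from a blended Q-table rather than from scratch, so its early exploration is biased toward actions the source agent already found rewarding; when the applicability score falls below the screening boundary, no transfer occurs and the new agent learns unaided.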

Cite this article

ZHOU Kai, WEI Ruixuan, ZHANG Qirui, DING Chao. Learning method for autonomous air combat based on experience transfer[J]. ACTA AERONAUTICA ET ASTRONAUTICA SINICA, 2020, 41(S2): 724285-724285. DOI: 10.7527/S1000-6893.2020.24285

References

[1] FINN C. Learning to learn with gradients[D]. Berkeley:University of California, Berkeley, 2018:1-20.
[2] PATRICIA N, CAPUTO B. Learning to learn, from transfer learning to domain adaptation:A unifying perspective[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE Press, 2014:1442-1449.
[3] HUANG J T, LI J, YU D, et al. Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers[C]//Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway:IEEE Press, 2013:7304-7308.
[4] WANG L, TANG K, XIN B, et al. Knowledge transfer between multi-granularity models for reinforcement learning[C]//Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics. Piscataway:IEEE Press, 2018:2881-2886.
[5] MARKOVA V D, SHOPOV V K. Knowledge transfer in reinforcement learning agent[C]//Proceedings of the IEEE International Conference on Information Technologies (InfoTech). Piscataway:IEEE Press, 2019:1-4.
[6] SANTORO A, BARTUNOV S, BOTVINICK M, et al. Meta-learning with memory-augmented neural networks[C]//Proceedings of the International Conference on Machine Learning. New York:ACM, 2016:1842-1850.
[7] XU Z, CAO L, CHEN X. Meta-Learning via weighted gradient update[J]. IEEE Access, 2019, 7:110846-110855.
[8] GOODFELLOW I, BENGIO Y, COURVILLE A. Deep learning[M]. Cambridge:MIT Press, 2016:438-481.
[9] PAN S J, YANG Q. A survey on transfer learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(10):1345-1359.
[10] TAN C, SUN F, KONG T, et al. A survey on deep transfer learning[C]//Proceedings of the International Conference on Artificial Neural Networks, 2018:270-279.
[11] TAYLOR M E, STONE P. Transfer learning for reinforcement learning domains:A survey[J]. Journal of Machine Learning Research, 2009, 10(7):1633-1685.
[12] SUTTON R S, BARTO A G. Reinforcement learning:An introduction[M]. Cambridge:The MIT Press, 2016:161-280.
[13] WEI R, ZHANG Q, XU Z. Peers' experience learning for developmental robots[J]. International Journal of Social Robotics, 2020, 12(1):35-45.
[14] ZHANG Q R. Research on anti-collision control method of UAV using cognitive development mechanism[D]. Xi'an:Air Force Engineering University, 2019:51-78(in Chinese).
[15] LI R, ZHAO Z, CHEN X, et al. TACT:A transfer actor-Critic learning framework for energy saving in cellular radio access networks[J]. IEEE Transactions on Wireless Communications, 2014, 13(4):2000-2011.
[16] KOUSHIK A M, HU F, KUMAR S. Intelligent spectrum management based on transfer actor-critic learning for rateless transmissions in cognitive radio networks[J]. IEEE Transactions on Mobile Computing, 2018, 17(5):1204-1215.
[17] ZHOU K, WEI R, ZHANG Q, et al. Learning system for air combat decision inspired by cognitive mechanisms of the brain[J]. IEEE Access, 2020, 8:8129-8144.
[18] SILVER D, LEVER G, HEESS N, et al. Deterministic policy gradient algorithms[C]//Proceedings of the 31st International Conference on Machine Learning, 2014:387-395.
[19] WANG L, WANG M, YUE T. A fuzzy deterministic policy gradient algorithm for pursuit-evasion differential games[J]. Neurocomputing, 2019, 362:106-117.
[20] LIU B Y, YE X B, ZHOU C F, et al. Allocation of composite mode on-orbit service resource based on improved DQN[J]. Acta Aeronautica et Astronautica Sinica, 2020, 41(5):323630(in Chinese).
[21] SUN T, TSAI S, LEE Y, et al. The study on intelligent advanced fighter air combat decision support system[C]//Proceedings of the IEEE International Conference on Information Reuse & Integration. Piscataway:IEEE Press, 2006:39-44.