Acta Aeronautica et Astronautica Sinica, 2021, Vol. 42, Issue (4): 523810-523810   doi: 10.7527/S1000-6893.2020.23810

Reinforcement learning method for supercritical airfoil aerodynamic design

LI Runze, ZHANG Yufei, CHEN Haixin   

  1. School of Aerospace Engineering, Tsinghua University, Beijing 100084, China
  • Received: 2020-01-08  Revised: 2020-02-01  Published: 2020-02-21
  • Corresponding author: CHEN Haixin, E-mail: chenhaixin@tsinghua.edu.cn
  • Supported by:
    National Natural Science Foundation of China (11872230, 91852108); Innovation Program of Tsinghua University (2015Z22003)

Abstract: Reinforcement learning is a class of machine learning methods for learning policies: imitating the human learning process, an agent continuously interacts with its environment to learn an action policy that maximizes the cumulative reward. Taking the designer's incremental modification process in supercritical airfoil aerodynamic design as an example, this paper defines the elements of reinforcement learning for aerodynamic design optimization and describes the implementation of the specific algorithm. The influence of selecting different demonstration examples on the pretraining and reinforcement learning results is studied, and the policy models obtained by reinforcement learning are tested for transferability in other similar environments. The results show that appropriate pretraining can effectively improve the efficiency of reinforcement learning and the robustness of the final policy, and that the resulting policy models have good transfer capability.
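
A minimal sketch, assuming a generic parameterized-geometry setup, of how the incremental modification process described above could be cast as a reinforcement-learning environment, together with the clipped surrogate loss that characterizes Proximal Policy Optimization (PPO). The class and function names, the geometry parameterization, and the placeholder objective are illustrative assumptions; the paper's CFD evaluation, policy network, and pretraining procedure are not reproduced here.

import numpy as np

class AirfoilModificationEnv:
    """Toy stand-in for an incremental airfoil modification environment.
    The state is a vector of geometry parameters, an action is a small
    increment applied to them, and the reward is the improvement of a
    placeholder objective (a CFD solver would provide this in practice)."""

    def __init__(self, n_params=20, max_steps=30):
        self.n_params = n_params
        self.max_steps = max_steps

    def reset(self):
        self.step_count = 0
        self.params = np.zeros(self.n_params)          # baseline geometry
        self.prev_obj = self._objective(self.params)
        return self.params.copy()

    def _objective(self, params):
        # Placeholder figure of merit (e.g. a surrogate for lift-to-drag ratio).
        return -float(np.sum((params - 0.3) ** 2))

    def step(self, action):
        # Incremental modification: apply a bounded change to the geometry.
        self.params = self.params + np.clip(action, -0.05, 0.05)
        obj = self._objective(self.params)
        reward = obj - self.prev_obj                   # reward = improvement per step
        self.prev_obj = obj
        self.step_count += 1
        done = self.step_count >= self.max_steps
        return self.params.copy(), reward, done


def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate loss at the core of Proximal Policy Optimization:
    minimize -E[min(r*A, clip(r, 1-eps, 1+eps)*A)] over sampled transitions."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -float(np.mean(np.minimum(unclipped, clipped)))


if __name__ == "__main__":
    env = AirfoilModificationEnv()
    state = env.reset()
    done, total_reward = False, 0.0
    while not done:
        # A trained (or pretrained) policy network would map `state` to an
        # action here; random actions are used only to exercise the interface.
        action = np.random.uniform(-0.05, 0.05, size=env.n_params)
        state, reward, done = env.step(action)
        total_reward += reward
    print(f"cumulative reward: {total_reward:.4f}")

Defining the reward as the per-step improvement of the objective, rather than its absolute value, matches the incremental-modification viewpoint in the abstract: each action is judged by how much it improves the current design.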

Key words: reinforcement learning, incremental modification, Proximal Policy Optimization (PPO), pretraining, imitation learning, application transferability
