Acta Aeronautica et Astronautica Sinica > 2023, Vol. 44, Issue (19): 328420   doi: 10.7527/S1000-6893.2023.28420


A spacecraft rendezvous and docking method based on inverse reinforcement learning

Chenglei YUE1,2, Xuechuan WANG1,2(), Xiaokui YUE1,2, Ting SONG3,4   

  1. National Key Laboratory of Aerospace Flight Dynamics, Northwestern Polytechnical University, Xi'an 710072, China
    2. School of Astronautics, Northwestern Polytechnical University, Xi'an 710072, China
    3. Shanghai Aerospace Control Technology Institute, Shanghai 201109, China
    4. Shanghai Key Laboratory of Space Intelligent Control Technology, Shanghai 201109, China
  • Received: 2022-12-22 Revised: 2023-01-18 Accepted: 2023-05-24 Online: 2023-10-15 Published: 2023-06-02
  • Contact: Xuechuan WANG E-mail: xcwang@nwpu.edu.cn
  • Supported by:
    National Natural Science Foundation of China (U2013206)

摘要:

针对使用神经网络解决追踪航天器接近静止目标问题,提出一种使用模型预测控制提供数据集,基于生成对抗逆强化学习训练神经网络的方法。首先在考虑追踪航天器最大速度约束,控制输入饱和约束和空间锥约束下,建立追踪航天器接近静止目标的动力学,并通过模型预测控制驱动航天器到达指定位置。其次为标称轨迹添加扰动,通过前述方法计算从各起始位置到目标点的轨迹,收集各轨迹各控制时刻的状态与控制信息,形成包含状态与对应控制的训练集。最后通过设置网络结构与参数和训练超参数,在训练集驱动下,采用生成对抗逆强化学习方法进行网络训练。仿真结果表明生成对抗逆强化学习可模仿专家轨迹行为,并成功训练神经网络,驱动航天器从起始点向目标位置运动。

关键词: 模型预测控制, 生成对抗逆强化学习, 模仿学习, 网络训练, 神经网络

Abstract:

For the problem of a chaser spacecraft approaching a static target with a neural network controller, a method is proposed that trains the network by generative adversarial inverse reinforcement learning, with model predictive control supplying the expert dataset. Firstly, the dynamics of the chaser spacecraft approaching a static target is established under the maximum-velocity constraint, the control-input saturation constraint, and the space cone constraint, and model predictive control is used to drive the chaser to the specified position. Secondly, disturbances are added to the nominal trajectory, the trajectories from each perturbed starting position to the target are computed by the same method, and the state and control input at every control step of every trajectory are collected to form a training set of state-control pairs. Finally, the network structure, network parameters, and training hyperparameters are set, and, driven by the training set, the network is trained with the generative adversarial inverse reinforcement learning method. Simulation results show that generative adversarial inverse reinforcement learning can imitate the behavior of the expert trajectories and successfully train a neural network that drives the spacecraft from its starting point to the static target.
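The expert-data-generation step described above (nominal trajectory, perturbed starting positions, collection of state-control pairs) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes simple double-integrator relative dynamics, uses a saturated state-feedback law as a stand-in for the MPC expert, perturbs only the initial position of a hypothetical nominal state, and all gains and limits are invented for the example.

```python
import numpy as np

DT = 1.0      # control step [s] (assumed)
U_MAX = 0.1   # control-input saturation [m/s^2] (assumed)
V_MAX = 0.5   # maximum-velocity constraint [m/s] (assumed)

def expert_step(x):
    """Saturated state-feedback control toward the target at the origin
    (a stand-in for the MPC expert; gains are hypothetical)."""
    pos, vel = x[:3], x[3:]
    u = -0.02 * pos - 0.3 * vel
    return np.clip(u, -U_MAX, U_MAX)   # enforce input saturation

def rollout(x0, steps=200):
    """Propagate the chaser and record one (state, control) pair per step."""
    x, data = x0.astype(float), []
    for _ in range(steps):
        u = expert_step(x)
        data.append((x.copy(), u.copy()))
        vel = np.clip(x[3:] + u * DT, -V_MAX, V_MAX)   # velocity constraint
        x = np.concatenate([x[:3] + vel * DT, vel])    # double integrator
    return data

rng = np.random.default_rng(0)
nominal_x0 = np.array([100.0, -50.0, 30.0, 0.0, 0.0, 0.0])

dataset = []
for _ in range(20):                     # 20 perturbed starting positions
    x0 = nominal_x0.copy()
    x0[:3] += rng.normal(0.0, 5.0, 3)   # disturb the nominal start position
    dataset.extend(rollout(x0))

states = np.array([s for s, _ in dataset])
controls = np.array([u for _, u in dataset])
print(states.shape, controls.shape)     # (4000, 6) (4000, 3)
```

Each (state, control) pair is one expert demonstration sample; in the pipeline described in the abstract, such a training set would then be fed to the generative adversarial inverse reinforcement learning trainer.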

Key words: model predictive control, generative adversarial inverse reinforcement learning, imitation learning, network training, neural network
