Electronics and Electrical Engineering and Control

A spacecraft rendezvous and docking method based on inverse reinforcement learning

  • Chenglei YUE,
  • Xuechuan WANG,
  • Xiaokui YUE,
  • Ting SONG
  • 1. National Key Laboratory of Aerospace Flight Dynamics, Northwestern Polytechnical University, Xi’an 710072, China
  • 2. School of Astronautics, Northwestern Polytechnical University, Xi’an 710072, China
  • 3. Shanghai Aerospace Control Technology Institute, Shanghai 201109, China
  • 4. Shanghai Key Laboratory of Space Intelligent Control Technology, Shanghai 201109, China
E-mail: xcwang@nwpu.edu.cn

Received date: 2022-12-22

  Revised date: 2023-01-18

  Accepted date: 2023-05-24

  Online published: 2023-06-02

Supported by

National Natural Science Foundation of China (U2013206)

Abstract

For spacecraft proximity maneuvering and rendezvous, a method for training neural networks by generative adversarial inverse reinforcement learning is proposed, with model predictive control supplying the expert dataset. First, the dynamics of a chaser spacecraft approaching a static target are established under a maximum velocity constraint, a control input saturation constraint, and an approach cone constraint. A model predictive controller is then designed to drive the chaser spacecraft to the target. Next, perturbations are added to the nominal starting position, trajectories from each perturbed starting position to the target are computed with this controller, and the state and control command at each time step along these trajectories are collected into a training set. Finally, the network structure is specified, the hyperparameters are set, and the network is trained on this set with the adversarial inverse reinforcement learning method. Simulation results show that adversarial inverse reinforcement learning imitates the behavior of the expert trajectories and successfully trains a neural network that drives the spacecraft from its starting point to the static target.
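The pipeline described in the abstract is straightforward to prototype. The sketch below is illustrative only, not the authors' implementation: it sets up the Clohessy-Wiltshire relative dynamics of the chaser about a target in circular orbit, generates expert state-command pairs with a simple unconstrained finite-horizon model predictive controller solved by least squares, and perturbs the nominal starting position to populate the training set. The velocity, saturation, and approach-cone constraints of the paper are omitted for brevity, and all parameter values (`n`, `dt`, `HORIZON`, the noise scale) are assumptions.

```python
import numpy as np

# --- Clohessy-Wiltshire relative dynamics (target in circular orbit) ---
n = 0.0011        # mean motion of the target orbit [rad/s], assumed value
dt = 1.0          # discretization step [s], assumed value
HORIZON = 30      # MPC prediction horizon, assumed value

# State [x, y, z, vx, vy, vz]; input: thrust acceleration [ux, uy, uz]
A = np.array([[0, 0, 0, 1, 0, 0],
              [0, 0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0, 1],
              [3*n**2, 0, 0, 0, 2*n, 0],
              [0, 0, 0, -2*n, 0, 0],
              [0, 0, -n**2, 0, 0, 0]])
B = np.vstack([np.zeros((3, 3)), np.eye(3)])
Ad = np.eye(6) + A*dt      # forward-Euler discretization (coarse but simple)
Bd = B*dt

# Stacked prediction matrices: X = PHI @ x0 + GAMMA @ U over the horizon
PHI = np.vstack([np.linalg.matrix_power(Ad, k+1) for k in range(HORIZON)])
GAMMA = np.zeros((6*HORIZON, 3*HORIZON))
for k in range(HORIZON):
    for j in range(k+1):
        GAMMA[6*k:6*k+6, 3*j:3*j+3] = np.linalg.matrix_power(Ad, k-j) @ Bd

def mpc_step(x0, q=1.0, r=100.0):
    """One receding-horizon step of an *unconstrained* quadratic MPC:
    min_U q*||PHI x0 + GAMMA U||^2 + r*||U||^2, solved by least squares."""
    A_ls = np.vstack([np.sqrt(q)*GAMMA, np.sqrt(r)*np.eye(3*HORIZON)])
    b_ls = np.concatenate([-np.sqrt(q)*(PHI @ x0), np.zeros(3*HORIZON)])
    U, *_ = np.linalg.lstsq(A_ls, b_ls, rcond=None)
    return U[:3]                       # apply only the first control input

def rollout(x0, steps=300):
    """Closed-loop rollout collecting (state, command) pairs for the dataset."""
    xs, us, x = [], [], x0.copy()
    for _ in range(steps):
        u = mpc_step(x)
        xs.append(x.copy()); us.append(u.copy())
        x = Ad @ x + Bd @ u
    return np.array(xs), np.array(us)

# Perturb the nominal starting position to generate a family of expert trajectories
rng = np.random.default_rng(0)
nominal = np.array([100.0, 50.0, 20.0, 0.0, 0.0, 0.0])
mask = np.r_[np.ones(3), np.zeros(3)]  # perturb position only, not velocity
dataset = [rollout(nominal + 5.0*rng.normal(size=6)*mask) for _ in range(10)]
```

On the learning side, the generative adversarial inverse reinforcement learning step can be sketched as a discriminator whose logit is f(s, a) − log π(a|s); training it with binary cross-entropy on expert-versus-policy samples yields the surrogate reward log D − log(1 − D) = f − log π, which is handed to a policy optimizer such as PPO. A minimal PyTorch sketch, again under assumed shapes and without the reward-shaping decomposition of full AIRL:

```python
import torch
import torch.nn as nn

class AIRLDiscriminator(nn.Module):
    """Discriminator D = exp(f) / (exp(f) + pi), i.e. logit(D) = f(s,a) - log pi(a|s)."""
    def __init__(self, obs_dim=6, act_dim=3, hidden=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, 1))

    def logit(self, obs, act, log_pi):
        return self.f(torch.cat([obs, act], dim=-1)).squeeze(-1) - log_pi

    def reward(self, obs, act, log_pi):
        # Surrogate reward for the policy learner: log D - log(1 - D) = f - log pi
        return self.logit(obs, act, log_pi).detach()

def discriminator_loss(disc, expert_batch, policy_batch):
    """Binary cross-entropy: expert samples labeled 1, policy samples labeled 0."""
    bce = nn.BCEWithLogitsLoss()
    e_logit = disc.logit(*expert_batch)   # each batch: (obs, act, log_pi)
    p_logit = disc.logit(*policy_batch)
    return (bce(e_logit, torch.ones_like(e_logit))
            + bce(p_logit, torch.zeros_like(p_logit)))
```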

Cite this article

Chenglei YUE, Xuechuan WANG, Xiaokui YUE, Ting SONG. A spacecraft rendezvous and docking method based on inverse reinforcement learning[J]. ACTA AERONAUTICA ET ASTRONAUTICA SINICA, 2023, 44(19): 328420-328420. DOI: 10.7527/S1000-6893.2023.28420
