Special Topic: Deep Space Optoelectronic Measurement and Intelligent Awareness Technology

Control of lunar landers based on secure reinforcement learning

  • Min YANG,
  • Guanjun LIU,
  • Ziyuan ZHOU
  • Department of Computer Science and Technology, Tongji University, Shanghai 201804, China

Received date: 2024-04-19

Revised date: 2024-05-07

Accepted date: 2024-07-24

Online published: 2024-08-20

Supported by

National Natural Science Foundation of China (62172299); Space Optoelectronic Measurement and Perception Lab., Beijing Institute of Control Engineering (LabSOMP-2023-03); The Fundamental Research Funds for the Central Universities (2023-4-YB-05); Shanghai Technological Innovation Action Plan (22511105500)

Abstract

In lunar landing missions, the lander must perform precise operations in extreme environments and often faces communication delays, which severely limit the real-time control capability of ground stations. To address these challenges, this study proposes a safety-enhanced Deep Reinforcement Learning (DRL) framework based on the Semi-Markov Decision Process (SMDP) to improve the operational safety of autonomous spacecraft landing. To compress the state space while preserving the key characteristics of the decision-making process, the framework compresses the Markov Decision Process (MDP) of historical trajectories into an SMDP and constructs an abstract SMDP state transition graph from the compressed trajectories. It then identifies the key state-action pairs carrying potential risks and applies a real-time monitoring and intervention strategy, effectively improving the safety of the spacecraft's autonomous landing. Furthermore, reverse breadth-first search is used to locate the state-action pairs that have a decisive impact on mission outcomes, and the constructed state-action monitor adjusts the model's behavior in real time. Experimental results show that, in a simulated environment, the framework increases the mission success rate of the lunar lander by up to 22% on pre-trained Deep Q-Network (DQN), Dueling DQN, and Double DQN (DDQN) models, without adding sensors or significantly changing the existing system configuration. According to the preset safety evaluation criteria, the framework improves safety by up to 42%. In addition, simulation results in a virtual environment demonstrate the practical potential of this framework for complex space missions such as lunar landing, where it can effectively improve operational safety and efficiency.
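
The abstract describes a two-stage pipeline: offline, compressed historical trajectories yield an abstract SMDP transition graph on which reverse breadth-first search flags potentially risky state-action pairs; online, a monitor intervenes when the pre-trained policy is about to take a flagged action. The sketch below is not the authors' implementation but a minimal illustration of that idea, assuming the Gymnasium LunarLander environment (requires the gymnasium[box2d] extra), a random-weight placeholder standing in for the pre-trained DQN, an illustrative grid abstraction, a depth-limited reverse BFS, and a simple "best non-flagged action" fallback rule.

```python
import numpy as np
import gymnasium as gym

# --- Offline stage -----------------------------------------------------------

def abstract_state(obs, bins=6):
    """Discretise the continuous LunarLander observation onto a coarse grid,
    standing in for the abstract (SMDP-style) states built from compressed
    trajectories."""
    low = np.array([-1.5, -0.5, -2.0, -2.0, -3.14, -5.0, 0.0, 0.0])
    high = np.array([1.5, 1.5, 2.0, 2.0, 3.14, 5.0, 1.0, 1.0])
    idx = np.floor((np.clip(obs, low, high) - low) / (high - low) * bins).astype(int)
    return tuple(np.minimum(idx, bins - 1))

def risky_pairs_by_reverse_bfs(graph, failure_states, depth=3):
    """Reverse breadth-first search over an abstract transition graph
    (mapping (state, action) -> set of successor states): starting from the
    failure states, collect (state, action) pairs that can reach a failure
    within `depth` transitions."""
    predecessors = {}
    for (s, a), successors in graph.items():
        for s_next in successors:
            predecessors.setdefault(s_next, set()).add((s, a))
    risky, frontier = set(), set(failure_states)
    for _ in range(depth):
        next_frontier = set()
        for bad in frontier:
            for (s, a) in predecessors.get(bad, ()):
                if (s, a) not in risky:
                    risky.add((s, a))
                    next_frontier.add(s)
        frontier = next_frontier
    return risky

# --- Online stage ------------------------------------------------------------

RNG = np.random.default_rng(0)
W_STUB = RNG.normal(size=(4, 8))  # placeholder weights; LunarLander has 4 actions

def q_values(obs):
    """Stand-in for the pre-trained DQN's forward pass; replace with the real
    network (DQN / Dueling DQN / DDQN) when one is available."""
    return W_STUB @ obs

def monitored_action(obs, risky_pairs):
    """Greedy action unless the (abstract state, action) pair is flagged as
    risky, in which case fall back to the best non-flagged action."""
    q = q_values(obs)
    s_abs = abstract_state(obs)
    for a in np.argsort(q)[::-1]:            # actions by decreasing Q-value
        if (s_abs, int(a)) not in risky_pairs:
            return int(a)
    return int(np.argmax(q))                 # all actions flagged: keep greedy

if __name__ == "__main__":
    # In practice `graph` and `failure_states` come from compressed historical
    # trajectories; empty stubs are used here so the script runs standalone.
    risky = risky_pairs_by_reverse_bfs(graph={}, failure_states=set())
    env = gym.make("LunarLander-v2")         # "LunarLander-v3" on newer Gymnasium
    obs, _ = env.reset(seed=0)
    done = False
    while not done:
        obs, reward, terminated, truncated, _ = env.step(monitored_action(obs, risky))
        done = terminated or truncated
    env.close()
```

In a full pipeline, the abstract graph would be populated from the compressed historical trajectories and the failure set from crash-terminated episodes before the monitor is deployed alongside the pre-trained policy.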

Cite this article

Min YANG, Guanjun LIU, Ziyuan ZHOU. Control of lunar landers based on secure reinforcement learning[J]. Acta Aeronautica et Astronautica Sinica, 2025, 46(3): 630553. DOI: 10.7527/S1000-6893.2024.30553
