Acta Aeronautica et Astronautica Sinica

Autonomous reentry guidance based on planning-correction hierarchical reinforcement learning

Gaoxiang Peng1, Bo Wang2, Lei Liu2

  1. Huazhong University of Science and Technology
    2.
  • Received:2025-06-27 Revised:2025-11-09 Online:2025-11-10 Published:2025-11-10
  • Contact: Bo Wang

Abstract: To enhance the rapid response capability, mission adaptability, and robustness against significant model deviations during aerospace vehicle reentry, this study proposes an autonomous reentry guidance method based on planning-correction hierarchical reinforcement learning (HRL). Addressing the training instability issues in traditional HRL, a planning-correction hierarchical strategy is introduced to eliminate the dependence of upper-level policy training on lower-level state transition data, establishing a dual-layer guidance framework. In the planning layer, a modular RL policy is employed to plan reference angle-of-attack and bank angle profiles, generating global trajectories according to mission requirements to ensure the framework's adaptability. In the correction layer, high-frequency trajectory corrections under model parameter deviations are performed to mitigate the impact of large parameter deviations. Simulation results demonstrate that the dual-layer guidance strategy can handle larger parameter deviations and improve guidance accuracy under significant uncertainties. Compared to the predictor-corrector guidance algorithm, the proposed strategy exhibits superior mission adaptability and real-time performance, enabling autonomous guidance from arbitrary initial positions and orientations.
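The dual-layer structure described above separates a low-frequency planning policy, which produces reference angle-of-attack and bank-angle profiles, from a high-frequency correction policy that compensates for model parameter deviations. A minimal sketch of this control flow is given below; all function names, the toy profile rule, and the feedback gain are illustrative assumptions and are not taken from the authors' implementation.

```python
def planning_layer(mission_range_km: float) -> tuple[float, float]:
    """Stand-in for the modular RL planning policy: produce reference
    angle-of-attack (alpha, deg) and bank-angle (sigma, deg) profiles
    from a mission requirement. The linear rule below is a toy example."""
    # Longer downrange -> shallower reference bank angle (illustrative only).
    sigma_ref = max(10.0, 70.0 - 0.005 * mission_range_km)
    alpha_ref = 12.0  # constant reference angle of attack for illustration
    return alpha_ref, sigma_ref


def correction_layer(alpha_ref: float, sigma_ref: float,
                     density_bias: float) -> tuple[float, float]:
    """Stand-in for the high-frequency RL correction policy: adjust the
    commanded bank angle when the sensed atmospheric density deviates
    from the model. A simple proportional law replaces the learned policy."""
    sigma_cmd = sigma_ref * (1.0 - 0.5 * density_bias)  # toy feedback gain
    return alpha_ref, sigma_cmd


if __name__ == "__main__":
    # One guidance cycle: plan once, then correct at high frequency.
    alpha_ref, sigma_ref = planning_layer(mission_range_km=5000.0)
    alpha_cmd, sigma_cmd = correction_layer(alpha_ref, sigma_ref,
                                            density_bias=0.2)
    print(alpha_cmd, sigma_ref, sigma_cmd)
```

In this sketch the planning call runs once per mission (or on re-tasking), while the correction call would run every guidance cycle; a positive density bias shallows the commanded bank angle relative to the reference.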

Key words: Aerospace vehicle, Reentry phase, Autonomous guidance, Hierarchical reinforcement learning, Planning-correction hierarchy
