Acta Aeronautica et Astronautica Sinica

Research on Key Technologies for Massive Unsteady Simulations of the Whole Compressor

  • Received:2023-11-13 Revised:2024-03-08 Online:2024-03-14 Published:2024-03-14

Abstract: The compressor of an aero-engine often comprises multiple ducts and up to a dozen stages, and full-annulus unsteady simulation is one means of improving the fidelity of its internal flow-field prediction. Because compressor grids of billions of cells make the unsteady computation enormous, a highly scalable solver is required for massive unsteady CFD simulation of the whole compressor. Based on the in-house software ASPAC, this work provides a general method for generating the initial field of the partitioned full-annulus grid from the single-passage steady flow field; analyzes and tests the internationally common practice of generating the full-annulus wall distance from the single-passage wall distance, and quantifies the region and magnitude of the deviation this practice introduces into unsteady results; develops a grid-overlap localization strategy for the blade-row interface that accounts for grid distortion; and carries out an MPI/OpenMP hybrid parallelization of the unsteady compressor simulation, resolving the data races arising in the OpenMP parallel part by restructuring the solution process. The results show that the developed method has been successfully applied to the full-annulus unsteady simulation of a twin-spool 13-stage compressor with 6.116 billion grid cells on 102,400 CPU cores. The MPI/OpenMP hybrid parallel mode largely eliminates the load imbalance caused by the dynamic treatment of blade-row interface interpolation: the parallel efficiency at 102,400 cores relative to 10,240 cores is 84.7%, far higher than the 46.7% achieved by the pure MPI mode.
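The efficiency figures quoted above follow from the standard strong-scaling definition (speedup divided by the core-count ratio). The sketch below is a minimal illustration of that calculation, not part of the ASPAC solver; the wall-clock timings are hypothetical and chosen only so that the result matches the reported 84.7% figure for 102,400 versus 10,240 cores.

```python
def strong_scaling_efficiency(t_ref: float, n_ref: int,
                              t_new: float, n_new: int) -> float:
    """Strong-scaling parallel efficiency: speedup over the core-count ratio.

    t_ref, n_ref: wall-clock time and core count of the baseline run.
    t_new, n_new: wall-clock time and core count of the scaled-up run.
    """
    speedup = t_ref / t_new
    return speedup / (n_new / n_ref)

# Hypothetical timings: a run 8.47x faster on 10x the cores corresponds
# to the 84.7% efficiency reported for 102,400 vs 10,240 cores.
print(round(strong_scaling_efficiency(100.0, 10_240, 100.0 / 8.47, 102_400), 3))
```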

Key words: twin-spool, 13-stage compressor, full-annulus unsteady, wall distance, MPI/OpenMP hybrid parallel
