
Acta Aeronautica et Astronautica Sinica


Multi-agent Communication Cooperation Based on Deep Reinforcement Learning and Information Theory

  

  • Received: 2023-11-10  Revised: 2024-03-11  Online: 2024-03-14  Published: 2024-03-14
  • Contact: Zhe-Jie ZHANG
  • Supported by:
    The National Natural Science Foundation of China

Abstract: Effective explicit communication among agents in a multi-agent system can increase their capacity for cooperation. However, existing communication strategies typically use the agents' local observations directly as the communication content, and the communication targets are usually fixed to a predefined topology. On the one hand, such schemes struggle to adapt to changes in tasks and environments, which introduces uncertainty into the communication process; on the other hand, ignoring the choice of communication targets and content wastes resources and lowers communication effectiveness. To address these issues, this paper proposes an approach that integrates deep reinforcement learning and information theory to realize an adaptive multi-agent communication mechanism. The approach uses a prior network to let each agent dynamically choose its communication target, and then applies mutual-information and information-bottleneck constraints to filter out redundant information. Finally, each agent aggregates its own and the received information to extract the information that is most useful for decision-making. Experiments in cooperative navigation and traffic junction environments show that the method improves the stability and interaction efficiency of multi-agent systems compared with other methods.
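To make the two mechanisms in the abstract concrete, the following is a minimal, hedged sketch (not the authors' implementation): a prior network that scores which teammate to communicate with, and a stochastic message encoder whose KL term acts as the information-bottleneck constraint limiting how much of the local observation leaks into the message. All module names, dimensions, and the unit-Gaussian prior are assumptions made for illustration.

```python
# Illustrative sketch only: target selection via a prior network and an
# information-bottleneck message encoder. Names and dimensions are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TargetPriorNet(nn.Module):
    """Scores candidate communication targets from the agent's local observation."""

    def __init__(self, obs_dim: int, n_agents: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_agents)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Probability of selecting each teammate as the communication target.
        return F.softmax(self.net(obs), dim=-1)


class BottleneckMessageEncoder(nn.Module):
    """Encodes an observation into a stochastic message z ~ N(mu, sigma^2)."""

    def __init__(self, obs_dim: int, msg_dim: int, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, msg_dim)
        self.log_var = nn.Linear(hidden, msg_dim)

    def forward(self, obs: torch.Tensor):
        h = self.backbone(obs)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterised sample of the message.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        # KL(q(z|obs) || N(0, I)) upper-bounds I(obs; z): the bottleneck term.
        kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(-1)
        return z, kl


if __name__ == "__main__":
    obs = torch.randn(4, 16)  # batch of 4 agents, assumed obs_dim = 16
    target_probs = TargetPriorNet(16, 4)(obs)
    msg, kl = BottleneckMessageEncoder(16, 8)(obs)
    # In training, kl.mean() (scaled by a coefficient) would be added to the RL loss.
    print(target_probs.shape, msg.shape, kl.shape)
```

In such a setup, the KL penalty discourages messages that simply copy the raw local observation, which is one way to read the abstract's claim of filtering redundant information.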

Key words: Multi-agent Deep Reinforcement Learning, Mutual Information, Explicit Communication, Information Bottleneck, Cooperative Environment
