jnzhang233/INSPIRE


INSPIRE: Individualized and Neighbor-based Sharing Prioritized Experience Replay for Multi-Agent Reinforcement Learning

Overview

This repository contains the official implementation of INSPIRE (Individualized and Neighbor-based Sharing Prioritized Experience Replay), a framework for efficient experience exchange in sparse-reward multi-agent reinforcement learning (MARL). INSPIRE is designed to enhance training efficiency and generalization performance under sparse reward conditions by combining:

  • Experience decomposition to partition team experiences into individualized replay buffers, reducing irrelevant information.

  • Neighbor discovery to restrict communication to local neighborhoods, improving contextual relevance and scalability.

  • Neighbor Experience Transmitter (NET) to evaluate and selectively share high-value experiences using feedback from nearby agents.

  • Experience receiver filtering to retain only the most informative experiences, mitigating overfitting and noise propagation.
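To make the interplay of these components concrete, here is a minimal toy sketch of individualized buffers with sender-side value checks and receiver-side filtering. This is not the repository's implementation; all class and function names (IndividualBuffer, share_with_neighbors) and the scalar-priority scheme are illustrative assumptions.

```python
import random

class IndividualBuffer:
    """Per-agent replay buffer: a hypothetical sketch of an
    individualized buffer holding (priority, transition) pairs."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.storage = []  # list of (priority, transition)

    def add(self, transition, priority):
        self.storage.append((priority, transition))
        if len(self.storage) > self.capacity:
            # evict the lowest-priority transition when full
            self.storage.sort(key=lambda pair: pair[0])
            self.storage.pop(0)

    def sample(self, k):
        # priority-weighted sampling, as in prioritized experience replay
        weights = [p for p, _ in self.storage]
        transitions = [t for _, t in self.storage]
        return random.choices(transitions, weights=weights,
                              k=min(k, len(transitions)))

def share_with_neighbors(sender, neighbors, threshold=0.5):
    """Toy stand-in for neighbor-based sharing: the sender transmits
    only high-priority experiences, and each receiver keeps one only
    if it is more informative than its current least-valued entry."""
    for priority, transition in sender.storage:
        if priority >= threshold:  # sender-side value check
            for nb in neighbors:
                # receiver filtering against the neighbor's own contents
                if not nb.storage or priority > min(p for p, _ in nb.storage):
                    nb.add(transition, priority)
```

In the actual framework, priorities would come from learned value estimates and neighbor feedback rather than fixed scalars, and "neighbors" would be determined by the neighbor-discovery step; the sketch only shows the send/filter control flow.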

To evaluate effectiveness and robustness, we conduct experiments under sparse-reward settings across SMAC, SMACv2, and GRF benchmarks. Results show that INSPIRE achieves up to 13.06% higher win rates on SMAC, 12.93% on SMACv2, and 37.92% on GRF compared to five state-of-the-art baselines, while converging faster and maintaining superior sample efficiency. Extensive ablation studies further confirm the contribution of each component and the scalability of the framework.

Instructions

We provide the running code for INSPIRE and the baseline algorithms. The general command format is:

python src/main.py --config=[Algorithm name] --env-config=[Env name] with env_args.map_name=[Map name]

For example, to run INSPIRE (QMIX variant):

python src/main.py --config=inspire_qmix --env-config=sc2 with env_args.map_name=2s3z

The config files are all located in src/config/algs.

Prerequisites

The Python environment includes all dependencies required to run INSPIRE as well as all baseline comparison algorithms.

If you encounter any issues while installing SMAC, SMACv2, or GRF, refer to PymarlZoo or to the issues page of the corresponding GitHub repository. Additionally, as long as the basic package dependencies are installed, the project still runs normally even if individual environments are not configured.

Set up StarCraft II and SMAC:

bash install_sc2.sh

Set up Google Research Football: Follow the instructions in GRF.

Set up SMACv2: Follow the instructions in SMACv2.

Set up other packages:

pip install -r requirements.txt

Acknowledgement

The code is implemented based on the following open-source projects:
