Multi-agent Reinforcement Learning for Cooperative Adaptive Cruise Control (MACACC)

This repo implements state-of-the-art MARL algorithms for Cooperative Adaptive Cruise Control (CACC), with the observability and communication of each agent limited to its neighborhood. For a fair comparison, all algorithms are applied to A2C agents and classified into two groups: IA2C contains non-communicative policies that use neighborhood information only, whereas MA2C contains communicative policies with certain communication protocols.

Available IA2C algorithms:

Available MA2C algorithms:

Available CACC scenarios:

  • CACC Catch-up: cooperative adaptive cruise control for catching up with the leading vehicle.
  • CACC Slow-down: cooperative adaptive cruise control for following the leading vehicle as it slows down.
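Both scenarios reduce to choosing each vehicle's acceleration so that headways and speeds track the leader. For intuition, here is a minimal Python sketch of generic platoon kinematics (illustrative only; the actual dynamics, rewards, and scenario parameters are defined by the repo's environment code):

import numpy as np

def step_platoon(headway, velocity, accel, v_leader, dt=0.1):
    # headway[i]: gap between vehicle i and the vehicle ahead of it;
    # velocity[i]: speed of vehicle i; accel[i]: its control input.
    # Vehicle 0 follows the externally scripted leader.
    v_ahead = np.concatenate(([v_leader], velocity[:-1]))
    new_headway = headway + (v_ahead - velocity) * dt
    new_velocity = np.clip(velocity + accel * dt, 0.0, None)
    return new_headway, new_velocity

In Catch-up the followers must close an initial gap to the leader; in Slow-down they must decelerate smoothly as the leader does, in both cases while keeping headways safe.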

Requirements & Libs

  • Use conda to create an environment: conda create --name py38 python=3.8 -y, then activate it with conda activate py38.
  • Install the basic requirements: pip install -r requirements.txt

Usage

First, define all hyperparameters (including the algorithm and DNN structure) in a config file under [config_dir] (examples), and create the base directory of each experiment [base_dir]. Other comments:

  • For each scenario (e.g., Catchup and Slowdown), there are corresponding config files.
  • To change the number of CAVs in the platoon, change n_vehicle in the config file (see the sketch below).
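As an illustration, a config file might contain an entry like the following; the section name, the scenario key, and the value shown are assumptions about the layout, and only n_vehicle is referenced above:

[ENV_CONFIG]
scenario = catchup
n_vehicle = 8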
  1. To train a new agent, run
python main.py --base-dir [base_dir] train --config-dir [config_dir]

Training config/data and the trained model will be output to [base_dir]/data and [base_dir]/model, respectively.

  2. To access TensorBoard during training, run
tensorboard --logdir=[base_dir]/log
  3. To evaluate a trained agent, run
python main.py --base-dir [base_dir] evaluate

Evaluation data will be output to [base_dir]/eva_data. Make sure the evaluation seeds differ from those used in training.

  4. MACACC (i.e., ia2c_qconsenet) is defined in agents/models/QConseNet. Several arguments in models.py need to be changed:
  • self.r (Lines 163-164) represents the l_inf norm of x, so it needs to be changed according to the scenario. For example, to run Slowdown, uncomment Line 163 instead.
  • There are several update strategies in models.py (Lines 263-330): Original represents the ConseNet update, which takes averages; MACACC represents the non-quantized version; QMACACC (n) represents the quantized version of MACACC (see the sketch after this list).
  • The resolution of the quantization is controlled by bi = self._quantization(wt, k, n=1) (Line 314).
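For intuition, below is a minimal sketch of what a quantized consensus update can look like, assuming a uniform quantizer over [-r, r] where r bounds the l_inf norm of the parameters; the function names and the averaging rule here are illustrative and are not taken from models.py:

import numpy as np

def quantize(w, r, n=1):
    # Snap each entry of w to a uniform grid of spacing 2*r/2**n on [-r, r].
    # A coarser grid (smaller n) means fewer bits per transmitted parameter.
    step = 2.0 * r / (2 ** n)
    clipped = np.clip(w, -r, r)
    return np.round((clipped + r) / step) * step - r

def consensus_step(w_self, neighbor_ws, r, n=1):
    # ConseNet-style averaging, but over quantized neighbor messages,
    # so agents exchange only low-resolution copies of their parameters.
    quantized = [quantize(w, r, n) for w in neighbor_ws]
    return np.mean([w_self] + quantized, axis=0)

# Example: average with two neighbors at 1-bit resolution (r = 1).
w = np.array([0.3, -0.8])
neighbors = [np.array([0.5, -0.2]), np.array([-0.1, 0.9])]
print(consensus_step(w, neighbors, r=1.0, n=1))

This is why r must match the scenario: if r underestimates the true parameter range, clipping distorts the exchanged messages, and if it overestimates, the fixed bit budget is spread too thin.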

Acknowledgement

This code builds heavily on Dr. Chu's codebase; please give credit to him at: deeprl_network
