This repo implements our paper:
Zhuoyi Lin, Yaoxin Wu, Bangjian Zhou, Zhiguang Cao, Wen Song, Yingqian Zhang, and Senthilnath Jayavelu, “Cross-Problem Learning for Solving Vehicle Routing Problems”, in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2024.
Please cite our paper if the code is useful for your project.
@inproceedings{lin2024cross,
  title     = {Cross-Problem Learning for Solving Vehicle Routing Problems},
  author    = {Lin, Zhuoyi and Wu, Yaoxin and Zhou, Bangjian and Cao, Zhiguang and Song, Wen and Zhang, Yingqian and Jayavelu, Senthilnath},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2024}
}
- Python>=3.8
- NumPy
- SciPy
- PyTorch>=1.7
- tqdm
- tensorboard_logger
- Matplotlib (optional, only for plotting)
Training and evaluation take three additional options: "finetune_ways", "rank", and "activation_func". "finetune_ways" selects the training mode:
- "normal": full fine-tuning, and also training from scratch as in the paper;
- "inside_tuning": use together with "activation_func" to choose the activation used in the adapters;
- "lora": use together with "rank" to set the rank of the LoRA module;
- "side_tuning": side-tuning as in the paper.
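For reference, a hypothetical sketch of how these options might be declared with argparse; the real definitions live in options.py, and the defaults and help strings here are illustrative only:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--finetune_ways', default='normal',
                    choices=['normal', 'inside_tuning', 'lora', 'side_tuning'],
                    help='Fine-tuning strategy (normal = full fine-tuning / from scratch)')
parser.add_argument('--rank', type=int, default=2,
                    help='Rank of the LoRA update (used with --finetune_ways lora)')
parser.add_argument('--activation_func', default='relu',
                    help='Adapter activation (used with --finetune_ways inside_tuning)')
```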
Training data is generated on the fly. To generate validation and test data (same as used in the paper) for all problems:
python generate_data.py --problem all --name validation --seed 4321
python generate_data.py --problem all --name test --seed 1234
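If you want to sanity-check a generated file, it is a standard pickled list of instances; a minimal sketch (the exact per-instance layout, e.g. depot, node coordinates, prizes, and max length for OP, is an assumption here based on the conventions of this codebase family):

```python
import pickle

# Load the generated OP validation set and inspect it.
with open('data/op/op_const20_validation_seed4321.pkl', 'rb') as f:
    data = pickle.load(f)

print(len(data), 'instances')
print(data[0])  # one OP instance (assumed: depot, locations, prizes, max length)
```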
To train on OP instances with 20 nodes, using rollout as the REINFORCE baseline and the generated validation set, loading weights from the TSP20 pretrained model, and doing full fine-tuning:
python run.py --graph_size 20 --baseline rollout --data_distribution const --run_name 'op20_rollout_full_finetuning' --val_dataset data/op/op_const20_validation_seed4321.pkl --finetune_ways normal --load_path pretrain_checkpoints/tsp20/tsp20_pretrain/epoch-99.pt
To train on OP instances with 20 nodes, loading weights from the TSP20 pretrained model and doing LoRA fine-tuning:
python run.py --graph_size 20 --baseline rollout --data_distribution const --run_name 'op20_rollout_lora' --val_dataset data/op/op_const20_validation_seed4321.pkl --finetune_ways lora --rank 2 --load_path pretrain_checkpoints/tsp20/tsp20_pretrain/epoch-99.pt
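For intuition, a minimal PyTorch sketch of what a rank-r LoRA update over a frozen pretrained linear layer looks like; the class name, initialization, and scaling are illustrative, not the repo's exact module:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, with A of shape (r, d_in) and B of shape (d_out, r)."""
    def __init__(self, base: nn.Linear, rank: int = 2, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))  # update starts at zero
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```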
To train on OP instances with 20 nodes, loading weights from the TSP20 pretrained model and doing side-tuning:
python run.py --graph_size 20 --baseline rollout --data_distribution const --run_name 'op20_rollout_side_tuning' --val_dataset data/op/op_const20_validation_seed4321.pkl --finetune_ways side_tuning --load_path pretrain_checkpoints/tsp20/tsp20_pretrain/epoch-99.pt
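Conceptually, side-tuning trains a small side network alongside the frozen pretrained encoder and blends the two outputs with a learned gate; a toy sketch, where the module names and the gating form are assumptions rather than the repo's implementation:

```python
import torch
import torch.nn as nn

class SideTuned(nn.Module):
    """Frozen pretrained backbone plus a small trainable side network;
    the two outputs are blended by a learned scalar gate."""
    def __init__(self, backbone: nn.Module, side: nn.Module):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():  # only the side network trains
            p.requires_grad = False
        self.side = side
        self.gate = nn.Parameter(torch.zeros(1))  # sigmoid(0) = 0.5 at init

    def forward(self, x):
        a = torch.sigmoid(self.gate)
        return a * self.backbone(x) + (1 - a) * self.side(x)
```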
To train on OP instances with 20 nodes, loading weights from the TSP20 pretrained model and doing inside-tuning with the LeakyReLU activation:
python run.py --graph_size 20 --baseline rollout --data_distribution const --run_name 'op20_rollout_inside_tuning_leakyrelu' --val_dataset data/op/op_const20_validation_seed4321.pkl --finetune_ways inside_tuning --activation_func leakyrelu --load_path pretrain_checkpoints/tsp20/tsp20_pretrain/epoch-99.pt
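Inside-tuning inserts small bottleneck adapters into the frozen encoder, and "activation_func" picks their nonlinearity; a toy sketch, where the bottleneck size, initialization, and set of activation names are assumptions:

```python
import torch.nn as nn

# Assumed mapping from the CLI activation names to PyTorch modules.
ACTIVATIONS = {'relu': nn.ReLU, 'leakyrelu': nn.LeakyReLU, 'gelu': nn.GELU, 'tanh': nn.Tanh}

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual connection."""
    def __init__(self, dim: int, bottleneck: int = 16, activation: str = 'leakyrelu'):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = ACTIVATIONS[activation]()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # adapter starts as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))
```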
For other settings, change the parameters accordingly.
To evaluate a model, you can use eval.py, which will additionally measure timing and save the results. Note that, as during training, you need to add the extra parameters that specify the model type:
python eval.py data/op/op_const20_test_seed1234.pkl --model pretrain_checkpoints/op20/op_full_finetuning --finetune_ways normal --epochs 99 --decode_strategy greedy
To report the best of 1280 sampled solutions, use:
python eval.py data/op/op_const20_test_seed1234.pkl --model pretrain_checkpoints/op20/op_full_finetuning --finetune_ways normal --epochs 99 --decode_strategy sample --width 1280 --eval_batch_size 1
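In pseudocode terms, this samples 1280 solutions per instance from the stochastic policy and keeps the best one; `model.sample` below is a placeholder, not the repo's actual API:

```python
# Toy illustration of "--decode_strategy sample --width 1280".
def best_of_n(model, instance, n=1280):
    best_tour, best_cost = None, float('inf')
    for _ in range(n):
        tour, cost = model.sample(instance)  # hypothetical sampling call
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost
```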
Baselines for different problems are within the corresponding folders and can be run (on multiple datasets at once) as follows:
python -m problems.tsp.tsp_baseline farthest_insertion data/tsp/tsp20_test_seed1234.pkl data/tsp/tsp50_test_seed1234.pkl data/tsp/tsp100_test_seed1234.pkl
To run baselines, you need to install Compass by running the install_compass.sh script from within the problems/op directory, and Concorde using the install_concorde.sh script from within problems/tsp. LKH3 should be automatically downloaded and installed when required. To use Gurobi, obtain a (free academic) license and follow the installation instructions.
You can run the commands below or see the comments in options.py or eval.py for help on the available options:
python run.py -h
python eval.py -h