ExNext: Self-Explainable Next POI Recommendation, ACM SIGIR Conference on Research and Development in Information Retrieval, 2024
Install the dependencies listed in `requirements.txt`:

`pip install -r requirements.txt`
This is the environment we used to train our model:
- SYSTEM: Ubuntu 22.04.3 LTS
- GPU: NVIDIA RTX 4090
- CPU: 13th Gen Intel(R) Core(TM) i7-13700KF
```
python==3.10.12
torch==2.1.1
tqdm==4.66.1
pyyaml==6.0.1
pandas==2.1.3
numpy==1.26.2
tensorboard==2.15.1
scikit-learn==1.3.2
shapely==2.0.2
```
For our explainable model, we set the learning rate to 1e-4 and train for 20 epochs, keeping β fixed at 1e-2 across all three datasets.
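As a rough illustration only, the snippet below shows how such hyper-parameters could be written in a per-dataset YAML file under `conf` and loaded with `pyyaml` (already in `requirements.txt`). The key names `learning_rate`, `epochs`, and `beta` are assumptions for the sketch, not the repository's actual config schema.

```
import yaml  # pyyaml, listed in requirements.txt

# Hypothetical example of the hyper-parameters above as YAML; the key names
# are assumptions, not the repository's actual schema.
example_yaml = """
learning_rate: 1.0e-4
epochs: 20
beta: 1.0e-2
"""

cfg = yaml.safe_load(example_yaml)
print(cfg["learning_rate"], cfg["epochs"], cfg["beta"])  # 0.0001 20 0.01
```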
The training and test data are stored in the `data/raw` folder, which contains the three raw datasets NYC, TKY, and CA. The first time the model is trained with `main.py`, it creates a `data/processed` folder that stores the processed datasets. After that, training uses the processed files directly and skips the data preprocessing step (see the sketch below).
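The sketch below only illustrates this caching behaviour; it is not the project's actual preprocessing code, and the per-dataset subfolder layout it assumes is a guess.

```
from pathlib import Path
import shutil

# Minimal sketch of the caching behaviour described above (not the project's
# actual code): the first run builds data/processed from data/raw, and later
# runs reuse the processed files and skip the preprocessing step.
RAW_DIR = Path("data/raw")
PROCESSED_DIR = Path("data/processed")

def ensure_processed(dataset: str) -> Path:
    """Return the processed folder for `dataset`, creating it on the first run."""
    out_dir = PROCESSED_DIR / dataset
    if out_dir.exists():
        return out_dir  # already preprocessed: reuse and skip the step
    out_dir.mkdir(parents=True, exist_ok=True)
    # Placeholder for the real preprocessing; copying keeps the sketch runnable.
    for raw_file in (RAW_DIR / dataset).glob("*"):
        if raw_file.is_file():
            shutil.copy(raw_file, out_dir / raw_file.name)
    return out_dir
```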
In particular, the raw CA dataset is generated with the following command:

`python pre/generate_ca.py`
Here we describe where our data comes from. Our datasets were sourced from STHGCN; we thank all the data providers.
- NYC:
- TKY:
- CA:
  - Raw: http://snap.stanford.edu/data/loc-gowalla.html
  - Category information: https://www.yongliu.org/datasets.html
Train the model with `python main.py`. All hyper-parameters are defined in `conf`.
The performance of our model can be reproduced with the command below; choose 'nyc', 'tky', or 'ca' for {dataset_name}.

`python main.py -f {dataset_name}.yml`
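For example, `python main.py -f nyc.yml` reproduces the results on the NYC dataset.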