This repository is the official Learning to Dispatch and Reposition (LDR) Competition submission template and starter kit! Clone this to make a new submission!
Note: we recommend keeping your clone up-to-date with the upstream repo before submitting, so that you receive the latest information about the competition. You can also watch this repo to be notified whenever there are new updates.
- Quickstart
- Implement your own dispatch and reposition agent
- Test your agent locally
- Get ready for submission
- Development tips
Other Resources:
- DiDi Official Announcement - Background and overview.
- LDR Competition Page - Main registration page & leaderboard.
- GAIA Open Dataset - Download the competition dataset.
Contact
Please contact the organizers (kddcup2020@didiglobal.com) if you have any problems concerning this challenge.
.
├── samples # The sample data illustrating the api of your agent required by the simulator.
├── local_test.py # A demo of how your agent will be used during simulation.
├── run_local.sh # Run test in the simulation environment.
├── environment.yml # The simulation environment specified in a conda environment file.
├── Dockerfile # The simulation environment specified in a Dockerfile.
├── README # The readme file.
└── model # IMPORTANT: Your submission folder.
    └── agent.py # IMPORTANT: Your implementation of the dispatch and reposition.
Clone this repo and create your submission bundle by zipping the whole model folder. Make sure no extra directories are created within the zip, e.g., zip -j submission.zip model/*. Then head over to the competition website for your first submission!
An LDR agent is equipped with two actions, dispatch and reposition, which receive observations from the environment and compute order-driver assignments and repositioning destinations for the drivers.
class Agent(object):
    """ Agent for dispatching and reposition """

    def __init__(self):
        """ Load your trained model and initialize the parameters """
        pass

    def dispatch(self, dispatch_observ):
        """ Compute the assignment between drivers and passengers at each time step
        :param dispatch_observ: a list of dict, the key in the dict includes:
            order_id, int
            driver_id, int
            order_driver_distance, float
            order_start_location, a list as [lng, lat], float
            order_finish_location, a list as [lng, lat], float
            driver_location, a list as [lng, lat], float
            timestamp, int
            order_finish_timestamp, int
            day_of_week, int
            reward_units, float
            pick_up_eta, float
        :return: a list of dict, the key in the dict includes:
            order_id and driver_id, the pair indicating the assignment
        """
        pass

    def reposition(self, repo_observ):
        """ Compute the reposition action for the given drivers
        :param repo_observ: a dict, the key in the dict includes:
            timestamp: int
            driver_info: a list of dict, the key in the dict includes:
                driver_id: driver_id of the idle driver in the treatment group, int
                grid_id: id of the grid the driver is located at, str
            day_of_week: int
        :return: a list of dict, the key in the dict includes:
            driver_id: corresponding to the driver_id in the od_list
            destination: id of the grid the driver is repositioned to, str
        """
        pass
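For concreteness, below is a minimal sketch of a complete (if naive) policy in the spirit of the default agent.py: dispatch greedily pairs orders and drivers in descending reward order, and reposition simply keeps every idle driver in its current grid. It is only meant to illustrate the input and output formats, not to suggest a competitive strategy.

class Agent(object):
    """ A naive agent: greedy dispatch, stay-in-place reposition """

    def __init__(self):
        pass

    def dispatch(self, dispatch_observ):
        # Consider candidate (order, driver) pairs from highest to lowest reward
        # and greedily assign each order to a driver that is still free.
        assigned_orders, assigned_drivers = set(), set()
        dispatch_action = []
        for od in sorted(dispatch_observ, key=lambda x: x['reward_units'], reverse=True):
            if od['order_id'] in assigned_orders or od['driver_id'] in assigned_drivers:
                continue
            assigned_orders.add(od['order_id'])
            assigned_drivers.add(od['driver_id'])
            dispatch_action.append({'order_id': od['order_id'], 'driver_id': od['driver_id']})
        return dispatch_action

    def reposition(self, repo_observ):
        # Keep every idle driver where it is: destination equals its current grid.
        return [{'driver_id': d['driver_id'], 'destination': d['grid_id']}
                for d in repo_observ['driver_info']]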
Look into the agent.py file inside the model folder for more details. The provided agent.py implements a default policy and is meant as a starting point for your own submission. The model folder must contain all your submitted files, including agent.py and its dependencies.
During online evaluation the simulator will look for the agent.py file inside your submission bundle and import your Agent class. A valid submission requires:
- an agent.py file at the first directory level of your submission bundle once unzipped;
- the Agent class structure and function signatures kept unchanged, as documented above.
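To make the calling convention concrete, here is a rough sketch of how an evaluation script could import and exercise your class. This is not the actual simulator or local_test.py code, and all observation values below are made up for illustration.

from agent import Agent  # the evaluator imports agent.py from the top level of your bundle

agent = Agent()

# One hand-written order-driver candidate pair (illustrative values only).
dispatch_observ = [{
    'order_id': 1, 'driver_id': 7,
    'order_driver_distance': 350.0,
    'order_start_location': [104.05, 30.66],
    'order_finish_location': [104.07, 30.68],
    'driver_location': [104.04, 30.66],
    'timestamp': 1488330000, 'order_finish_timestamp': 1488330900,
    'day_of_week': 2, 'reward_units': 3.5, 'pick_up_eta': 120.0,
}]
print(agent.dispatch(dispatch_observ))

# One idle driver to reposition (the grid id is a made-up placeholder).
repo_observ = {
    'timestamp': 1488330000,
    'driver_info': [{'driver_id': 7, 'grid_id': 'example_grid_id'}],
    'day_of_week': 2,
}
print(agent.reposition(repo_observ))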
We suggest developing your agent inside the model folder. To make sure your agent runs correctly in the online evaluation environment, you just need to run ./run_local.sh. It will launch a Docker environment, import your model, and call your agent on a provided sample dataset as a quick test before submission.
In particular, local_test.py gives an example of how your submission will be used in the simulation, and the Dockerfile describes the environment in which your agent.py will be executed.
When you are ready to submit, zip the model folder while making sure no extra directories are created within the zip: go inside the model folder and run zip -r ../submission.zip . -x '*.git*' -x '*__pycache__*', which creates your submission bundle submission.zip just outside of the model folder.
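As a quick sanity check on the bundle layout (this snippet is not part of the starter kit), you can list its contents and confirm that agent.py appears at the top level rather than under a model/ prefix:

import zipfile

# Print every entry in the bundle; 'agent.py' should show up without any
# leading directory such as 'model/'.
with zipfile.ZipFile('submission.zip') as bundle:
    for name in bundle.namelist():
        print(name)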
Finally head over to the competition website and see how your algorithm performs!
Suppose your model directory looks like this, where modelfile resides next to agent.py inside the model folder:
├── model
│   ├── agent.py
│   └── modelfile
You can use code like below to load the modelfile into your agent:
import os

MODEL_PATH = os.path.join(
    os.path.dirname(os.path.abspath(__file__)), 'modelfile')


class Agent(object):
    def __init__(self):
        self._load(MODEL_PATH)

    def _load(self, modelpath):
        """ Implement your model loading routine """
        pass
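For instance, if modelfile happens to be a pickled Python object (purely an assumption for illustration; your artifact might instead need torch.load, np.load, etc.), the loading routine could look like this:

import os
import pickle

MODEL_PATH = os.path.join(
    os.path.dirname(os.path.abspath(__file__)), 'modelfile')


class Agent(object):
    def __init__(self):
        self._load(MODEL_PATH)

    def _load(self, modelpath):
        # Assumes 'modelfile' is a pickle; swap in the loader that matches
        # your own serialization format.
        with open(modelpath, 'rb') as f:
            self.model = pickle.load(f)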
We currently do not provide stack traces for security reasons. We do provide error messages, with an error type defined for each case involving the Agent:
- InitAgentError: when there are exceptions from Agent.__init__
- DispatchAgentError: when there are exceptions from Agent.dispatch
- RepositionAgentError: when there are exceptions from Agent.reposition
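Because only these error types come back from the online evaluation, it helps to surface full tracebacks yourself while testing locally with run_local.sh. A minimal sketch, assuming a hypothetical helper _dispatch_impl that holds your real logic:

import traceback

class Agent(object):
    def dispatch(self, dispatch_observ):
        try:
            return self._dispatch_impl(dispatch_observ)
        except Exception:
            # Locally this prints the full traceback; online you would only
            # see a DispatchAgentError.
            traceback.print_exc()
            raise

    def _dispatch_impl(self, dispatch_observ):
        # Your actual dispatch logic goes here (hypothetical helper).
        return []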