Process that runs on all Deepdrive evaluation workers, responsible for managing the local Docker containers that run problems and bots for Deepdrive problems in Botleague.
Process is self-updating from the production branch on GitHub.
The worker is triggered by setting the worker's instance id on a job in Firestore.
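Concretely, the trigger amounts to the worker noticing a Firestore job document whose instance id matches its own. A minimal sketch of that matching logic (the `instance_id` and `status` field names and values here are assumptions for illustration, not the actual Firestore schema):

```python
from typing import Optional


def find_assigned_job(jobs: list, my_instance_id: str) -> Optional[dict]:
    """Return the first unstarted job assigned to this worker, if any.

    `jobs` stands in for documents fetched from the Firestore jobs
    collection; the field names are hypothetical.
    """
    for job in jobs:
        if job.get("instance_id") == my_instance_id and job.get("status") == "created":
            return job
    return None


jobs = [
    {"id": "job-1", "instance_id": "worker-a", "status": "created"},
    {"id": "job-2", "instance_id": "worker-b", "status": "created"},
]
print(find_assigned_job(jobs, "worker-b")["id"])  # job-2
```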
This process runs in a Docker container and starts the sim or bot container by mapping the Docker socket.
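Mapping the socket means the worker's own container has `/var/run/docker.sock` bind-mounted from the host, so any containers it launches are siblings started by the host's Docker daemon. A sketch of how such a launch command could be assembled (the helper and its flags are illustrative, not the worker's actual code):

```python
def docker_run_args(image: str, map_socket: bool = True) -> list:
    """Build a `docker run` argument list. Bind-mounting the Docker
    socket lets the launched process talk to the host's Docker daemon."""
    args = ["docker", "run", "-d"]
    if map_socket:
        args += ["-v", "/var/run/docker.sock:/var/run/docker.sock"]
    args.append(image)
    return args


print(" ".join(docker_run_args("deepdrive/problem-worker")))
```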
Instances are managed by the problem-endpoint.
Prerequisites: Python 3.7+ and Docker
Start the NVIDIA GCP instance described here
mkdir ~/.gcpcreds
# On local machine
gcloud compute scp ~/.gcpcreds/your-creds.json <your-problem-worker>:~/.gcpcreds/your-creds.json
# On server
sudo mkdir /root/.gcpcreds
sudo cp /home/your-user-dir/.gcpcreds/your-creds.json /root/.gcpcreds/
# Clone repo
cd /usr/local/src
sudo git clone https://github.com/deepdrive/problem-worker --branch production
# Perform initial run
sudo su
cd problem-worker
make run
docker ps
docker logs <your-new-container-name> -f
If everything looks good after ~10 seconds, the container will run on boot and restart if it dies; cf. Docker restart policies.
Now stop the instance (leaving the container running so it restarts on boot) and create an image to fully bake your new eval VM. You can do this in the GCP Console under Images, using the instance's disk as the source disk.
Note that from now on, the source will be updated automatically by the auto_updater via git. There's no need to rebuild the container even if the Python dependencies change, since we install requirements.txt and pull the latest source on start, and restart if requirements.txt changes.
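The restart-on-requirements-change behavior can be implemented by hashing requirements.txt and comparing against the hash seen on the previous run. A sketch under that assumption (the function names are hypothetical, not the auto_updater's actual API):

```python
import hashlib
import tempfile
from pathlib import Path


def file_hash(path: Path) -> str:
    """Content hash used to detect dependency changes between pulls."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def requirements_changed(req_path: Path, last_hash: str) -> bool:
    """True when requirements.txt differs from the previously recorded
    hash, signaling the worker should pip-install and restart itself."""
    return file_hash(req_path) != last_hash


# Demo: simulate a git pull that bumps a pinned dependency.
with tempfile.TemporaryDirectory() as tmp:
    req = Path(tmp) / "requirements.txt"
    req.write_text("requests==2.22.0\n")
    before = file_hash(req)
    req.write_text("requests==2.23.0\n")
    print(requirements_changed(req, before))  # True
```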
If you do need to update the container, you'll have to bake a new VM image and reference that image in the problem-coordinator's worker_instance_create.json.