DeepView.Profile API is a RESTful service wrapper around DeepView.Profile, a profiling tool for PyTorch neural networks.
DeepView.Profile API has the same requirements as DeepView.Profile.
It runs in NVIDIA GPU environments with CUDA; both the PyTorch build and the NVIDIA drivers must support CUDA 11.7 or later.
If you have installed the NVIDIA drivers, you can check the CUDA version with the following command:
nvidia-smi
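The version reported by nvidia-smi can also be validated programmatically. A minimal sketch (the helper below is illustrative and not part of the project):

```python
# Hypothetical helper: check that a CUDA version string such as the one
# printed by `nvidia-smi` (e.g. "12.2") meets the 11.7+ requirement.

def cuda_version_ok(version: str, minimum: tuple = (11, 7)) -> bool:
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= minimum

print(cuda_version_ok("12.2"))  # True
print(cuda_version_ok("11.6"))  # False
```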
git clone https://github.com/cpavel/DeepView.Profile-API.git
cd DeepView.Profile-API
pip install -r requirements.txt
You can build a Docker image to run the project in a container. First clone the repository, then run the following command from the repo root:
docker build --pull --rm . -f "docker/Dockerfile" -t deepview/service:latest
Set DEBUG to True if necessary, either in a .env file or as a system environment variable:
export DEBUG=True
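In a Django settings module, a DEBUG flag set this way is typically read from the environment along these lines (a minimal sketch; the exact parsing in this project's settings.py may differ):

```python
import os

# Read a boolean flag from the environment; anything other than
# "true"/"1" (case-insensitive) is treated as False. Illustrative
# only -- the project's settings.py may parse the flag differently.
def env_flag(name: str, default: str = "False") -> bool:
    return os.environ.get(name, default).strip().lower() in ("true", "1")

DEBUG = env_flag("DEBUG")
```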
Run Django development server from repo root:
python manage.py runserver 0.0.0.0:80
You may change the port if necessary.
If DEBUG=False, you may need to add the --insecure flag to runserver to see the Swagger UI.
Make sure you have installed the NVIDIA Container Toolkit first. If you have trouble setting up your host machine, see How to Install PyTorch on the GPU with Docker.
Run the container:
docker run -it --rm --gpus all -v $(pwd):/app/ -p 80:80 deepview/service
- A production server must not use the Django development web server
- You will need to serve Django static files to be able to see the Swagger UI
- Don't forget to check the final settings with
python manage.py check --deploy
- Check that the Swagger page is working (http://127.0.0.1:80/swagger if running locally)
- Try the "status" endpoint to verify that everything is OK
- Try the "profile" endpoint with one of the DeepView.Profile/examples
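As a sketch of how a client might call these endpoints: the base URL matches the local run above, but the /profile path and the "source" payload field are assumptions based on the endpoint names; consult the Swagger UI for the actual schema.

```python
import json
from urllib import request

BASE_URL = "http://127.0.0.1:80"  # assumes the local deployment above

def build_profile_request(source_code: str) -> request.Request:
    """Build a POST request for the (assumed) /profile endpoint.

    The "source" field name is hypothetical -- check the Swagger UI
    for the real request schema.
    """
    body = json.dumps({"source": source_code}).encode()
    return request.Request(
        f"{BASE_URL}/profile",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_profile_request("print('hello')")
# request.urlopen(req) would send it once the server is running
```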
Black v2023+ is used for code formatting. Ruff v0.0.292+ is used for code linting.