TorchServe (the PyTorch model server) is a flexible and easy-to-use tool for serving deep learning models exported from PyTorch.
Use the TorchServe CLI or the pre-configured Docker images to start a service that exposes HTTP endpoints for handling model inference requests.
Full installation instructions are in the project repo: https://github.com/pytorch/serve/blob/master/README.md
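To give a sense of the CLI workflow, here is a minimal sketch assuming TorchServe and torch-model-archiver are already installed; the model name my_model, the weights file my_model.pt, and the sample input file are placeholders for illustration, and the built-in image_classifier handler is used only as an example:
mkdir model_store
# package the serialized model into a .mar archive in the model store
torch-model-archiver --model-name my_model --version 1.0 --serialized-file my_model.pt --handler image_classifier --export-path model_store
# start the server and load the archive
torchserve --start --ncs --model-store model_store --models my_model.mar
# health check, then a test inference request against the default inference port (8080)
curl http://127.0.0.1:8080/ping
curl http://127.0.0.1:8080/predictions/my_model -T sample_input.jpg
torchserve --stop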
You can check out the latest source code as follows:
git clone https://github.com/pytorch/serve.git
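If you prefer the pre-configured Docker images mentioned above, the sketch below shows one possible setup; it assumes the pytorch/torchserve:latest image tag, the default in-container model-store path /home/model-server/model-store, and a locally built my_model.mar, so check the repo's Docker documentation for current tags and paths:
docker pull pytorch/torchserve:latest
# run the server, mounting a local model store and exposing the inference (8080) and management (8081) ports
docker run --rm -it -p 8080:8080 -p 8081:8081 -v $(pwd)/model_store:/home/model-server/model-store pytorch/torchserve:latest
# in another shell: register the archive via the management API, then send a test request
curl -X POST "http://127.0.0.1:8081/models?url=my_model.mar&initial_workers=1"
curl http://127.0.0.1:8080/predictions/my_model -T sample_input.jpg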
If you use TorchServe in a publication or project, please cite it: https://github.com/pytorch/serve