TorchServe is a flexible and easy-to-use tool for serving PyTorch models.
For full documentation, see Model Server for PyTorch Documentation.
Conda instructions are provided in more detail, but you may also use pip and virtualenv if that is your preference.
Note: Java 11 is required. Instructions for installing Java 11 for Ubuntu or macOS are provided in the Install with Conda section.
- Install Java 11
sudo apt-get install openjdk-11-jdk
- Use pip to install TorchServe and the model archiver:
pip install torch torchtext torchvision sentencepiece psutil future
pip install torchserve torch-model-archiver
Note: For Conda, Python 3.8 is required to run TorchServe.
- Install Java 11
sudo apt-get install openjdk-11-jdk
- Install Conda (https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html)
- Create an environment and install torchserve and torch-model-archiver
For CPU:
conda create --name torchserve torchserve torch-model-archiver psutil future pytorch torchtext torchvision -c pytorch -c powerai
For GPU:
conda create --name torchserve torchserve torch-model-archiver psutil future pytorch torchtext torchvision cudatoolkit=10.1 -c pytorch -c powerai
- Activate the environment
source activate torchserve
- Optional: if using torchtext models
pip install sentencepiece
- Install Java 11
brew tap AdoptOpenJDK/openjdk
brew cask install adoptopenjdk11
- Install Conda (https://docs.conda.io/projects/conda/en/latest/user-guide/install/macos.html)
- Create an environment and install torchserve and torch-model-archiver
conda create --name torchserve torchserve torch-model-archiver psutil future pytorch torchtext torchvision -c pytorch -c powerai
- Activate the environment
source activate torchserve
- Optional: if using torchtext models
pip install sentencepiece
Now you are ready to package and serve models with TorchServe.
If you plan to develop with TorchServe and change some of the source code, you must install it from source code.
- Install Java 11
sudo apt-get install openjdk-11-jdk
- Install dependencies
pip install psutil future
- Clone the repo
git clone https://github.com/pytorch/serve
cd serve
- Install TorchServe in editable mode so that your source changes take effect
pip install -e .
- To develop with torch-model-archiver:
cd serve/model-archiver
pip install -e .
- To upgrade TorchServe or the model archiver from source and keep your changes editable, run:
pip install -U -e .
For information about the model archiver, see detailed documentation.
This section shows a simple example of serving a model with TorchServe. To complete this example, you must have already installed TorchServe and the model archiver.
To run this example, clone the TorchServe repository:
git clone https://github.com/pytorch/serve.git
Then run the following steps from the parent directory of the root of the repository.
For example, if you cloned the repository into /home/my_path/serve, run the steps from /home/my_path.
To serve a model with TorchServe, first archive the model as a MAR file. You can use the model archiver to package a model. You can also create model stores to store your archived models.
- Create a directory to store your models.
mkdir model_store
- Download a trained model.
wget https://download.pytorch.org/models/densenet161-8d451a50.pth
- Archive the model by using the model archiver. The extra-files param uses a file from the TorchServe repo, so update the path if necessary.
torch-model-archiver --model-name densenet161 --version 1.0 --model-file ./serve/examples/image_classifier/densenet_161/model.py --serialized-file densenet161-8d451a50.pth --export-path model_store --extra-files ./serve/examples/image_classifier/index_to_name.json --handler image_classifier
For more information about the model archiver, see Torch Model Archiver for TorchServe.
After you archive and store the model, use the torchserve command to serve the model.
torchserve --start --ncs --model-store model_store --models densenet161.mar
After you execute the torchserve command above, TorchServe runs on your host, listening for inference requests.
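A quick way to confirm the server is up is to hit the inference API's ping endpoint; this sketch assumes the default inference port of 8080:
# Health check; 8080 is the default inference port, adjust if you changed the config
curl http://localhost:8080/ping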
Note: If you specify model(s) when you run TorchServe, it automatically scales backend workers to the number of available vCPUs (if you run on a CPU instance) or the number of available GPUs (if you run on a GPU instance). On powerful hosts with many compute resources (vCPUs or GPUs), this start-up and autoscaling process can take considerable time. If you want to minimize TorchServe start-up time, avoid registering and scaling models during start-up and move that to a later point by using the corresponding Management API, which allows finer-grained control of the resources allocated to any particular model.
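As a sketch of that deferred approach, you can start the server with an empty model list and register the model afterwards through the Management API. This assumes the default management port of 8081, and the worker counts below are only illustrative:
# Start TorchServe without registering any models
torchserve --start --ncs --model-store model_store
# Register the model via the Management API (default port 8081) with one initial worker
curl -X POST "http://localhost:8081/models?url=densenet161.mar&initial_workers=1"
# Scale workers later as needed
curl -X PUT "http://localhost:8081/models/densenet161?min_worker=2"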
To test the model server, send a request to the server's predictions API.
Complete the following steps:
- Open a new terminal window (other than the one running TorchServe).
- Use curl to download one of these cute pictures of a kitten and use the -o flag to name it kitten.jpg for you.
- Use curl to send a POST request to the TorchServe predict endpoint with the kitten's image.
The following code completes all three steps:
curl -O https://s3.amazonaws.com/model-server/inputs/kitten.jpg
curl -X POST http://127.0.0.1:8080/predictions/densenet161 -T kitten.jpg
The predict endpoint returns a prediction response in JSON. It will look something like the following result:
[
{
"tiger_cat": 0.46933549642562866
},
{
"tabby": 0.4633878469467163
},
{
"Egyptian_cat": 0.06456148624420166
},
{
"lynx": 0.0012828214094042778
},
{
"plastic_bag": 0.00023323034110944718
}
]
You will see this result in the response to your curl call to the predict endpoint, and in the server logs in the terminal window running TorchServe. It is also logged locally, along with metrics.
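If you want to inspect those local logs and metrics, TorchServe with its default configuration typically writes them to a logs/ directory under the directory where you started the server; the file names below are typical but may vary by version or configuration:
# Default log location (may differ if you customized config.properties or log4j settings)
ls logs/
# Follow model metrics as requests arrive (file name may vary by version)
tail -f logs/model_metrics.log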
Now you've seen how easy it can be to serve a deep learning model with TorchServe! Would you like to know more?
To stop the currently running TorchServe instance, run the following command:
torchserve --stop
You see output specifying that TorchServe has stopped.
- docker - Refer to the official docker installation guide
- git - Refer to the official git set-up guide
- TorchServe source code. Clone and enter the repo as follows:
git clone https://github.com/pytorch/serve.git
cd serve
The following are examples of how to use the build_image.sh script to build Docker images to support CPU or GPU inference.
To build the TorchServe image for a CPU device using the master branch, use the following command:
./build_image.sh
To create a Docker image for a specific branch, use the following command:
./build_image.sh -b <branch_name>
To create a Docker image for a GPU device, use the following command:
./build_image.sh --gpu
To create a Docker image for a GPU device with a specific branch, use the following command:
./build_image.sh -b <branch_name> --gpu
To run your TorchServe Docker image and start TorchServe inside the container with a pre-registered resnet-18 image classification model, use the following command:
./start.sh
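If you would rather run the built image directly instead of using start.sh, a minimal sketch looks like the following; the image tag torchserve:latest and the port mappings are assumptions, so check the tag reported by build_image.sh on your machine:
# Expose the default inference (8080) and management (8081) ports
# The tag "torchserve:latest" is an assumption; use the tag produced by build_image.sh
docker run --rm -it -p 8080:8080 -p 8081:8081 torchserve:latest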
- Full documentation on TorchServe
- Manage models API
- Inference API
- Package models for use with TorchServe
We welcome all contributions!
To learn more about how to contribute, see the contributor guide here.
To file a bug or request a feature, please file a GitHub issue. For filing pull requests, please use the template here. Cheers!