Update documentation to latest release 0.28.0. #2197

Merged
README.md (12 changes: 6 additions & 6 deletions)

````diff
@@ -48,20 +48,20 @@ brew services stop djl-serving
 For Ubuntu
 
 ```
-curl -O https://publish.djl.ai/djl-serving/djl-serving_0.27.0-1_all.deb
-sudo dpkg -i djl-serving_0.27.0-1_all.deb
+curl -O https://publish.djl.ai/djl-serving/djl-serving_0.28.0-1_all.deb
+sudo dpkg -i djl-serving_0.28.0-1_all.deb
 ```
 
 For Windows
 
 We are considering to create a `chocolatey` package for Windows. For the time being, you can
-download djl-serving zip file from [here](https://publish.djl.ai/djl-serving/serving-0.27.0.zip).
+download djl-serving zip file from [here](https://publish.djl.ai/djl-serving/serving-0.28.0.zip).
 
 ```
-curl -O https://publish.djl.ai/djl-serving/serving-0.27.0.zip
-unzip serving-0.27.0.zip
+curl -O https://publish.djl.ai/djl-serving/serving-0.28.0.zip
+unzip serving-0.28.0.zip
 # start djl-serving
-serving-0.27.0\bin\serving.bat
+serving-0.28.0\bin\serving.bat
 ```
 
 ### Docker
````
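However the server is installed (deb package, Windows zip, or Docker), a quick way to confirm it started is its HTTP health-check endpoint. This is a sketch that assumes the default listen port 8080 used elsewhere in this PR; adjust it if your configuration differs:

```shell
# Verify that djl-serving is up and answering requests.
# Assumes the default port 8080.
curl -s http://localhost:8080/ping
```

A running server answers with a small JSON status document; a connection error usually means the frontend has not finished starting or is bound to a different port.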
engines/python/README.md (4 changes: 2 additions & 2 deletions)

````diff
@@ -29,13 +29,13 @@ The javadocs output is generated in the `build/doc/javadoc` folder.
 ## Installation
 You can pull the Python engine from the central Maven repository by including the following dependency:
 
-- ai.djl.python:python:0.27.0
+- ai.djl.python:python:0.28.0
 
 ```xml
 <dependency>
     <groupId>ai.djl.python</groupId>
     <artifactId>python</artifactId>
-    <version>0.27.0</version>
+    <version>0.28.0</version>
     <scope>runtime</scope>
 </dependency>
 ```
````
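For Gradle projects, the same artifact can be declared with the coordinates shown in that diff. This is a sketch of the equivalent declaration; the `runtimeOnly` configuration is assumed here as the counterpart of Maven's `runtime` scope:

```groovy
// Gradle equivalent of the Maven dependency above (a sketch);
// runtimeOnly mirrors Maven's runtime scope.
dependencies {
    runtimeOnly "ai.djl.python:python:0.28.0"
}
```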
serving/docker/README.md (20 changes: 10 additions & 10 deletions)

````diff
@@ -17,7 +17,7 @@ You can find different `compose-target` in `docker-compose.yml`, like `cpu`, `lm
 
 ## Run docker image
 
-You can find DJL latest release docker image on [dockerhub](https://hub.docker.com/r/deepjavalibrary/djl-serving/tags?page=1&name=0.27.0).
+You can find DJL latest release docker image on [dockerhub](https://hub.docker.com/r/deepjavalibrary/djl-serving/tags?page=1&name=0.28.0).
 DJLServing also publishes nightly publish to the [dockerhub nightly](https://hub.docker.com/r/deepjavalibrary/djl-serving/tags?page=1&name=nightly).
 You can just pull the image you need from there.
 
@@ -29,55 +29,55 @@ Here are a few examples to run djl-serving docker image:
 ### CPU
 
 ```shell
-docker pull deepjavalibrary/djl-serving:0.27.0
+docker pull deepjavalibrary/djl-serving:0.28.0
 
 mkdir models
 cd models
 curl -O https://resources.djl.ai/test-models/pytorch/bert_qa_jit.tar.gz
 
-docker run -it --rm -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.27.0
+docker run -it --rm -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.28.0
 ```
 
 ### GPU
 
 ```shell
-docker pull deepjavalibrary/djl-serving:0.27.0-pytorch-gpu
+docker pull deepjavalibrary/djl-serving:0.28.0-pytorch-gpu
 
 mkdir models
 cd models
 curl -O https://resources.djl.ai/test-models/pytorch/bert_qa_jit.tar.gz
 
-docker run -it --runtime=nvidia --shm-size 2g -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.27.0-pytorch-gpu
+docker run -it --runtime=nvidia --shm-size 2g -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.28.0-pytorch-gpu
 ```
 
 ### AWS Inferentia
 
 ```shell
-docker pull deepjavalibrary/djl-serving:0.27.0-pytorch-inf2
+docker pull deepjavalibrary/djl-serving:0.28.0-pytorch-inf2
 
 mkdir models
 cd models
 
 curl -O https://resources.djl.ai/test-models/pytorch/resnet18_inf2_2_4.tar.gz
-docker run --device /dev/neuron0 -it --rm -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.27.0-pytorch-inf2
+docker run --device /dev/neuron0 -it --rm -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.28.0-pytorch-inf2
 ```
 
 ### aarch64 machine
 
 ```shell
-docker pull deepjavalibrary/djl-serving:0.27.0-aarch64
+docker pull deepjavalibrary/djl-serving:0.28.0-aarch64
 
 mkdir models
 cd models
 
 curl -O https://resources.djl.ai/test-models/pytorch/resnet18_inf2_2_4.tar.gz
-docker run --device /dev/neuron0 -it --rm -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.27.0-aarch64
+docker run --device /dev/neuron0 -it --rm -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.28.0-aarch64
 ```
 
 ## Run docker image with custom command line arguments
 
 You can pass command line arguments to `djl-serving` directly when you using `docker run`
 
 ```
-docker run -it --rm -p 8080:8080 deepjavalibrary/djl-serving:0.27.0 djl-serving -m "djl://ai.djl.huggingface.pytorch/sentence-transformers/all-MiniLM-L6-v2"
+docker run -it --rm -p 8080:8080 deepjavalibrary/djl-serving:0.28.0 djl-serving -m "djl://ai.djl.huggingface.pytorch/sentence-transformers/all-MiniLM-L6-v2"
 ```
````
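Once the container from the last example above is running, the registered model can be exercised over HTTP. The snippet below is a sketch, not DJL's documented request format: the model name `all-MiniLM-L6-v2` is assumed to be derived from the `-m` URL, and the plain-text payload is an assumption about the model's translator, so check the server's model list first and adjust accordingly:

```shell
# List the models the server has registered (default port 8080).
curl -s http://localhost:8080/models

# Send a plain-text input to the assumed model name; adapt the name and
# payload to whatever the /models response reports.
curl -s -X POST http://localhost:8080/predictions/all-MiniLM-L6-v2 \
  -H "content-type: text/plain" \
  -d "DJL Serving is a model serving solution."
```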
wlm/README.md (2 changes: 1 addition & 1 deletion)

````diff
@@ -56,7 +56,7 @@ You can pull the server from the central Maven repository by including the follo
 <dependency>
     <groupId>ai.djl.serving</groupId>
     <artifactId>wlm</artifactId>
-    <version>0.27.0</version>
+    <version>0.28.0</version>
 </dependency>
 ```
````
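If you use Gradle instead of Maven, the dependency above maps to the following sketch; the `implementation` configuration is assumed since the Maven snippet declares no scope:

```groovy
// Gradle equivalent of the wlm Maven dependency (a sketch).
dependencies {
    implementation "ai.djl.serving:wlm:0.28.0"
}
```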