Deprecate neuron Dockerfile in favor of DLC #1775

Merged · 10 commits · Aug 16, 2022
11 changes: 8 additions & 3 deletions docker/Dockerfile.neuron.dev
@@ -1,13 +1,18 @@
## THIS DOCKERFILE HAS BEEN DEPRECATED ##
# Please refer to the deep learning containers repository for TorchServe containers
# that run on Inferentia processors and ship with up-to-date drivers:
# https://github.com/aws/deep-learning-containers/blob/master/available_images.md

# syntax = docker/dockerfile:experimental
#
# The following comments have been shamelessly copied from https://github.com/pytorch/pytorch/blob/master/Dockerfile
#
#
# NOTE: To build this you will need a docker version > 18.06 with
# experimental enabled and DOCKER_BUILDKIT=1
#
# If you do not use buildkit you are not going to have a good time
#
# For reference:
# https://docs.docker.com/develop/develop-images/build_enhancements/
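#
# A minimal build-invocation sketch (assumptions for illustration only: the
# command is run from the repository root, and the image tag is arbitrary,
# not an official one):
#
#   DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile.neuron.dev -t torchserve:neuron-dev .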

ARG BASE_IMAGE=ubuntu:18.04
@@ -74,7 +79,7 @@ RUN if [ "$MACHINE_TYPE" = "gpu" ]; then export USE_CUDA=1; fi \
&& chown -R model-server /home/model-server \
&& cp docker/config.properties /home/model-server/config.properties \
&& mkdir /home/model-server/model-store && chown -R model-server /home/model-server/model-store \
&& pip install torch-neuron 'neuron-cc[tensorflow]' --extra-index-url=https://pip.repos.neuron.amazonaws.com

EXPOSE 8080 8081 8082 7070 7071
USER model-server
9 changes: 6 additions & 3 deletions docker/README.md
@@ -1,3 +1,6 @@
### Deprecation notice:
[Dockerfile.neuron.dev](https://github.com/pytorch/serve/blob/master/docker/Dockerfile.neuron.dev) has been deprecated. Please refer to the [deep learning containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) repository for Neuron TorchServe containers.
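As a rough sketch of the replacement workflow, the snippet below pulls a Neuron TorchServe image from the DLC registry. The region, repository name, and tag are placeholders to be replaced with values from available_images.md; the registry account ID is the one documented for AWS Deep Learning Containers.

```bash
# Authenticate Docker against the AWS DLC ECR registry (region is a placeholder).
aws ecr get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com

# Pull a Neuron TorchServe inference image; take the exact repository and tag
# from available_images.md.
docker pull 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference-neuron:<tag-from-available-images>
```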

## Contents of this Document

* [Prerequisites](#prerequisites)
@@ -12,7 +15,7 @@
* For base Ubuntu with GPU, install the following NVIDIA container toolkit and driver (a quick sanity check follows this list):
* [Nvidia container toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian)
* [Nvidia driver](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-nvidia-driver.html)

* NOTE - Dockerfiles have not been tested on the Windows native platform.
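For the GPU prerequisites above, a quick sanity check is the standard smoke test from NVIDIA's install guide (the base image here is only an example):

```bash
# Should print the host's GPU table from inside a container if the toolkit
# and driver are installed correctly.
docker run --rm --gpus all ubuntu nvidia-smi
```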

## First things first
@@ -283,7 +286,7 @@ You may want to consider the following aspects / docker options when deploying t
The current ulimit values can be viewed by executing ```ulimit -a```. A more exhaustive set of options for constraining resources can be found in the Docker documentation [here](https://docs.docker.com/config/containers/resource_constraints/), [here](https://docs.docker.com/engine/reference/commandline/run/#set-ulimits-in-container---ulimit) and [here](https://docs.docker.com/engine/reference/run/#runtime-constraints-on-resources); a combined sketch follows the run example at the end of this section.
* Exposing specific ports / volumes between the host & the Docker environment.

* ```-p 8080:8080 -p 8081:8081 -p 8082:8082 -p 7070:7070 -p 7071:7071```
TorchServe uses default ports 8080 / 8081 / 8082 for the REST-based inference, management & metrics APIs, and 7070 / 7071 for the gRPC APIs. You may want to expose these ports to the host for HTTP & gRPC requests between Docker and the host.
* The model store is passed to torchserve with the --model-store option. You may want to consider using a shared volume if you prefer pre-populating models in the model-store directory.

@@ -298,5 +301,5 @@ docker run --rm --shm-size=1g \
-p8082:8082 \
-p7070:7070 \
-p7071:7071 \
--mount type=bind,source=/path/to/model/store,target=/tmp/models <container> torchserve --model-store=/tmp/models
```
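
As a sketch of combining the resource-constraint options discussed above with the standard run command (the limit values are illustrative, not recommendations):

```bash
docker run --rm -it \
    --memory=4g \
    --cpus=2 \
    --ulimit nofile=65535:65535 \
    -p 8080:8080 -p 8081:8081 -p 8082:8082 \
    --mount type=bind,source=/path/to/model/store,target=/tmp/models \
    <container> torchserve --model-store=/tmp/models
```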